• Jesus_666@lemmy.world
    2 days ago

    There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by summarizing it) is actually something LLMs are better at than the conversational question answering that’s getting all the hype these days.

    Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it’d be done in a minute.
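    A rough sketch of that math (every figure here is an assumption, not a sourced number: ~100k words per book, ~1.3 tokens per word, and a hypothetical $3 per million input tokens):

    ```python
    # Back-of-envelope estimate of what stuffing whole books into an LLM costs.
    # All constants below are assumptions for illustration, not real pricing.
    WORDS_PER_BOOK = 100_000      # assumed length of a typical book
    TOKENS_PER_WORD = 1.3         # rough English tokenization ratio
    PRICE_PER_M_TOKENS = 3.00     # hypothetical $/million input tokens
    BOOKS = 17

    tokens_per_book = int(WORDS_PER_BOOK * TOKENS_PER_WORD)
    total_tokens = tokens_per_book * BOOKS
    cost = total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

    print(f"{tokens_per_book:,} tokens/book, {total_tokens:,} total, ~${cost:.2f}")
    # → 130,000 tokens/book, 2,210,000 total, ~$6.63
    ```

    Even with these generous assumptions, each book alone blows past most models’ context windows, so in practice you’d be chunking and paying for many overlapping passes, not one clean read.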

    And there’s still the hallucination issue, especially with everything then getting filtered through another LLM.

    So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain…