• Ech@lemm.ee · 77 points · 2 days ago

    Did they ask an LLM how LLMs work? Because that shit’s fucking farcical. They’re not “traversing” anything, bud. You get 17 different versions because each model is making that shit up on the fly.

    • LeninOnAPrayer@lemm.ee · 28 points · 2 days ago

      Nah, see, they read thousands of pages in like an hour. That’s why. They just don’t need to anymore because they’re so intelligent and do it the smart way, with like models and shit, to compress it into a half-page summary that is clearly just as useful.

      Seriously, that’s what they would say.

      They don’t actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that’s as good as the software engineer that actually developed the LLMs.

      Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn’t understand that difference so they get away with it.

      It’s the equivalent of the dude that trades shitcoins thinking he understands crypto like the guy committing all of the code to actually run it.

      (Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)

      I really think some parts of our tech world need to be made LESS user friendly. Not more.

      • Aceticon@lemmy.dbzer0.com · 4 points · 1 day ago

        It’s people at the peak point of the Dunning-Kruger curve sharing their “wisdom” with the rest of us.

    • Jesus_666@lemmy.world · 9 points · 2 days ago

      There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by producing a summary) is actually something LLMs are better at than the conversational question answering that’s getting all the hype these days.

      Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it’d be done in a minute.
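      Just to show the scale of it, here’s a rough back-of-envelope. The word count, tokens-per-word ratio, and per-token price are all illustrative assumptions, not real pricing:

```python
# Back-of-envelope: cost of stuffing one whole book into 17 models' context.
# Assumptions (illustrative only): a 100k-word book, ~1.33 tokens per word,
# and a hypothetical $3.00 per million input tokens.
words_per_book = 100_000
tokens = int(words_per_book * 1.33)          # context needed for one pass
price_per_million_tokens = 3.00              # assumed input price, USD
cost_one_model = tokens / 1_000_000 * price_per_million_tokens
cost_all_17 = cost_one_model * 17
print(f"{tokens:,} tokens per book")
print(f"${cost_one_model:.2f} for one model, ${cost_all_17:.2f} for all 17")
```

      And that’s one book, one pass — no retries, no follow-up questions, and assuming a model can even hold that much context at once.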

      And there’s still the hallucination issue, especially with everything then getting filtered through another LLM.

      So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain…