• dylanmorgan@slrpnk.net · 2 days ago

    It’s the opposite of the OP’s headline.

    Aimbots work because being good at games is essentially bending your skills to match a simulation, and an aimbot can have the simulation's parameters written directly into it.

    LLMs are blenders for human-made content, with zero understanding of why some art resonates and other art doesn't. A human with decent skill will always outperform an LLM, because the human knows the ineffable qualities that make a piece of art resonate.

    • LwL@lemmy.world · 1 day ago

      100% yes, but because I really hate how everyone conflates AI with LLMs these days, I have to say this: the LLM isn't generating the image; at most it's generating a prompt for an image-generating model (which you could also write yourself).

      • jsomae@lemmy.ml · 1 day ago

        PSA: you get more control this way. Instead of asking an LLM to generate an image, you can just say “generate an image with this prompt: …”

    • jsomae@lemmy.ml · 1 day ago

      It’s not useful to talk about the content LLMs create in terms of whether they “understand” it or not. How could you even verify whether an LLM understands what it’s producing? Do you think it’s possible that some future technology might have this understanding? Do humans understand everything they produce? (A lot of people get pretty far by bullshitting.)

      Shouldn’t your argument equally apply to aimbots? After all, does an aimbot really understand the strategy, the game, the je-ne-sais-quoi of high-level play?

        • jsomae@lemmy.ml · edited · 1 day ago

          Totally agreed. What most people don’t realize is that bullshit is way more powerful than we could have ever imagined. (I also suspect that humans, including me, bullshit our way through daily life more than we realize.)

          So-called AI “reasoning” essentially works by having the AI produce bullshit, then read over that bullshit and check how reasonable it sounds (of course, “checking how reasonable it sounds” is also bullshit, but it’s at least systematic bullshit). This can produce genuinely useful results. Obviously, you need to know when it’s a good time to use AI and when it isn’t, which most people still don’t have a good feel for.
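          As a rough sketch of that generate-then-self-check loop in Python (the `ask` callable is a made-up stand-in for whatever model API you actually use; none of the names here are real library calls):

          ```python
          from typing import Callable

          # Toy sketch of "generate, then check the generation": the model drafts an
          # answer, critiques its own draft, and regenerates until the critique passes.
          # `ask` is a hypothetical stand-in for a real LLM call, not a real API.
          def answer_with_self_check(ask: Callable[[str], str], question: str,
                                     max_rounds: int = 3) -> str:
              draft = ask(f"Answer this, showing your reasoning:\n{question}")
              for _ in range(max_rounds):
                  verdict = ask(
                      f"Question: {question}\nDraft answer: {draft}\n"
                      "Does the reasoning hold up? Reply OK, or point out the flaw."
                  )
                  if verdict.strip().upper().startswith("OK"):
                      break
                  # Feed the critique back in and regenerate: bullshit checking bullshit.
                  draft = ask(
                      f"Question: {question}\nPrevious answer: {draft}\n"
                      f"Critique: {verdict}\nWrite an improved answer."
                  )
              return draft
          ```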

          If this sounds absurd, consider how people can do very well on exams about subjects they know nothing about just by bullshitting. An LLM can do that, and on top of that it has been trained on far more material than any human. So it’s more capable of bullshitting than any human ever could be.

          But people still think it’s useless because “it doesn’t understand.”