• pelespirit@sh.itjust.works
    2 days ago

    DuckDuckGo’s AI search seems to somewhat agree with you (meaning Futurism and Engadget):

    Grok, the AI chatbot, initially spread misinformation about Charlie Kirk’s death by claiming he survived the shooting and that videos of the incident were fake. This confusion stemmed from the chatbot’s inability to accurately process breaking news and conflicting information, leading to a series of incorrect statements before eventually acknowledging Kirk’s death.

    That means AI really, really sucks and can be manipulated easily.

    • Cyberspark@sh.itjust.works
      1 day ago

      I think it’s well known at this point that Grok in particular has been designed to be easy to manipulate, by deliberately keeping it in the dark and feeding it only select information, so that Musk can make it say what he wants.

    • Zetta@mander.xyz
      2 days ago

      Yes, LLMs, or what people call AI, are absolutely easy to manipulate. Just the way you phrase your question can steer the model toward a particular answer. I haven’t been on Twitter in a long time, but I hopped on yesterday and today to check out all of the Kirk memes.

      I saw so many comments from people, both happy and upset about Kirk dying, phrasing their questions to the LLM in manipulative ways to try to get the response they wanted from Grok.

      LLMs are indeed horrible for live or recent events, and more importantly horrible for anything that is super important to not get wrong.

      Don’t get me wrong, I personally find LLMs useful, and I occasionally use open-source models for tasks they’re better at; for me that typically means reformatting or compiling shorter notes from documents (rough sketch below). Nothing super critical.
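
      As a rough sketch of the kind of thing I mean, assuming a locally running Ollama server and an illustrative model name (both are hypothetical details, not a description of any particular setup):

      ```python
      # Minimal sketch: condensing notes with a locally hosted open-source model.
      # Assumes an Ollama server on localhost:11434 and a pulled model named
      # "llama3" -- both are illustrative assumptions, not specifics from the comment above.
      import requests

      def condense_notes(text: str) -> str:
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3",
                  "prompt": "Condense the following notes into short bullet points:\n\n" + text,
                  "stream": False,  # ask for a single JSON object instead of a token stream
              },
              timeout=120,
          )
          resp.raise_for_status()
          return resp.json()["response"]

      if __name__ == "__main__":
          print(condense_notes("Meeting notes: report due Friday; Alex writes the draft; review on Thursday."))
      ```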

    • lime!@feddit.nu
      2 days ago

      DDG doesn’t run its own LLM; they’re just a frontend to ChatGPT that (allegedly) strips out all the tracking.