• James R Kirk@startrek.website · 96 points · 10 days ago

    LLMs cannot lie/gaslight because they do not know what it means to be honest. They are just next-word predictors.

    I think the ads are terrible too, but it’s a fool’s errand to try to reason with an LLM chatbot.

    • MudMan@fedia.io · 23 points · 10 days ago

      Man, seriously, every time I see someone get into one of these weird conversations where they try to convince a chatbot of something, it’s slightly disturbing. Not being aware of how pointless it is, and knowing but still being drawn in by the less uncanny-valley language, are about on par with each other.

      People keep sharing this as proof of AI shortcomings, but it honestly makes me worry most about the human side. There’s zero new info to be gained from the chatbot behavior.

    • Jankatarch@lemmy.world · 13 points · 10 days ago

      They take a sentence and predict what the first result on Google or a reply on WhatsApp would look like.
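
      A minimal sketch of what that prediction loop looks like, purely for illustration: a bigram counter stands in for the model here. Real LLMs predict over subword tokens with a neural network, but the generation loop has the same autoregressive shape; the toy corpus and greedy decoding are my assumptions, not anyone’s actual system.

      ```python
      from collections import Counter, defaultdict

      # Toy "next-word predictor": count which word follows which in a
      # tiny corpus, then always emit the most frequent continuation.
      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def generate(word, steps=5):
          out = [word]
          for _ in range(steps):
              candidates = follows[out[-1]]
              if not candidates:
                  break
              # Greedy decoding: always pick the most likely next word.
              out.append(candidates.most_common(1)[0][0])
          return " ".join(out)

      print(generate("the"))  # "the cat sat on the cat" with this toy corpus
      ```

      There’s no notion of truth anywhere in that loop, only frequency, which is the sense in which “lying” doesn’t really apply.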