The problem, Wooldridge said, was that AI chatbots failed in unpredictable ways and had no idea when they were wrong, yet were designed to provide confident answers regardless. Delivered in human-like, sycophantic responses, those answers could easily mislead people, he added. The risk is that people start treating AIs as if they were human: in a 2025 survey by the Center for Democracy and Technology, nearly a third of students reported that they or a friend had had a romantic relationship with an AI.

  • U7826391786239@lemmy.zip
    7 days ago

    Because AI is embedded in so many systems, a major incident could strike almost any sector

    good

    there’s FOMO, and then there’s “i have to do this thing for literally no other reason than everyone else is doing it,” which is even dumber

    hopefully some people learn a little lesson from their own personal Hindenburg, but i’m not optimistic that many will