The problem, Wooldridge said, was that AI chatbots failed in unpredictable ways and had no idea when they were wrong, yet were designed to provide confident answers regardless. Delivered in human-like, sycophantic responses, those answers could easily mislead people, he added. The risk is that people start treating AIs as if they were human. In a 2025 survey by the Center for Democracy and Technology, nearly a third of students reported that they or a friend had had a romantic relationship with an AI.




Agree with the problem… not sure how this failure mode is similar to that of the Hindenburg. The Titanic, "the unsinkable ship", might be more appropriate.
Who cares then, where we are headed won’t have any icebergs!
According to AI, most of the iceberg is under the water, so it’s not really dangerous for boats, only for submarines.
Won’t anyone think of the fish?