The problem, Wooldridge said, was that AI chatbots failed in unpredictable ways and had no idea when they were wrong, yet were designed to provide confident answers regardless. Delivered in human-like, sycophantic responses, those answers could easily mislead people, he added. The risk is that people start treating AIs as if they were human. In a 2025 survey by the Center for Democracy and Technology, nearly a third of students reported that they or a friend had had a romantic relationship with an AI.




The Hindenburg disaster killed 35 people. I can say, without the faintest shadow of a doubt, that AI has already killed more people than that. I don’t know what kind of disaster would be enough to stop this race towards AI, but I can guarantee it will take something VASTLY more horrific than the Hindenburg, and it may well be something fundamentally existential to the human race. The further we pursue it, the worse it gets.
Unlike the age of airships, this Pandora’s box will not just go away if we simply decide to close it again.
There was a documentary. It’s called The Terminator.