Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.



Full chat log or it never happened.
This is on the fucking Guardian. Not some random greentext. Get help.
Weird how you lash out online like this about questioning the content of a chat that allegedly led to a suggestion of suicide.
I think you’re the one who needs help.
Weird how you expect intimate details like a full chat log to just be immediately publicly available, when this is currently under litigation. Really weird to basically simp for a corporation when this isn’t even close to the first instance of LLM output encouraging suicide. Almost like your motivations are more closely aligned with theirs than with the average people who are vulnerable. 🤷
Do you think that you could supply me with a chat log where you talk to an LLM without gaming it into telling you to kill yourself, and where it just naturally arrives at that conclusion?
I didn’t think you could. And I don’t think this guy did either.
The fact that you jump to your own conclusion without waiting for a reply says enough about your intentions. Don’t worry though, you’re not alone in your stance. People like you, who refuse to give empathy except as currency, are an integral part of why the human race is fucked. We will never rise above constantly destroying each other and tearing one another down.
Thanks for doing your part.
I have empathy for people who truly want to commit suicide. I just know you can’t supply any example prompts.
Feel free to prove me wrong. With evidence.
You’re the one who came in pissing and moaning about chat logs. I’m not your babysitter. It’s a big world and you’re a big kid now, go ahead and explore. I have no energy to educate the unwilling. Fuck that.
I’m very aware you’re unable to reproduce the suicidal responses without gaming the AI.
You can keep trying in vain to make me feel bad, but you’re arguing the existence of something that cannot be replicated or proven, like Santa, the Easter bunny, or god.
You’re the one who chose to talk to me… so do it or stop responding lmfao.
“Truly want”? So killing yourself after being convinced to do so by LLM output means you just had a fake desire to kill yourself, somehow resulting in a real death. Funny how that works. I would say you need help, but there’s no helping people like you.
I’d say you’re determined to put words into my mouth.
Either way, you can’t supply a way to reproduce this.
This isn’t even remotely the first time LLMs have done this to people. Sure it would be nice to see the full log, but disbelieving it on sight is a weird reaction at this point.
It’s not a weird reaction. I’ve never ever had an LLM suggest bodily harm, so clearly these people are leading it in this direction. I have never ever seen a chat log from one of these accusations, and I haven’t heard of one of these going to trial.
If you feel this happens so frequently, give me a series of prompts to use so that I can replicate this.
And since you won’t, that’s what I thought.
There was another article about a very similar set of circumstances: a man originally from Portland going off the deep end with an AI relationship. He committed suicide by jumping off a bridge, not because a prompt told him to, but because of the deep psychosis from the long-term engagement.
The chat logs as reported were 55,000 pages long.
If those logs become public you’ll have your chance. I hope you don’t wear out your fingers in your attempt to replicate it.
I’m sure the psychosis was there at the beginning, regardless of the AI. I have seen people develop strange behavior after long-term engagement… but they always gamed the system to do that. It was never natural.
It’s very sad regardless.
“I’ve never won the lottery so clearly nobody does, and news reports about it are fake. You want me to believe it? Then you spend time and money to play and win it, then show me exactly how you won.”
Never mind that “winning” in this case means dying.
Rare does not mean never. It’s happened enough to be a serious problem already and this is just one more case.
And no, I will not chat with those psychotic machines for you.
Bare assertion / Proof by assertion / Failure to meet the burden of proof / Shifting the burden of proof / Appeal to belief / Appeal to popularity / Argument from ignorance
Yawn.
Fallacy fallacy.
(If I even made those, which I doubt.)
Yawn indeed.
I don’t click links. You just hate AI, and you’re willing to believe anything to support your opinion regardless of evidence. You sound like a MAGAt.
These devices are designed to take whatever you put into them and amplify it back to you from an outside perspective, using a vast database of information and fiction and references to make connections with other things.
It’s the ultimate paranoia/depression distiller. If you only feed it your pain and fears, it will only focus on those things and build narratives around them, because that’s how these things work: they take your prompts (“i’m sad”) and do what a depressed or paranoid mind already does, but hyper-efficiently, drawing connections and writing stories around them.
People who don’t understand how their own minds work sure aren’t going to understand how artificial minds work, and they will end up creating these reinforcement loops in their own heads and in the LLM, and get utterly lost down deep holes of spiraling delusion and misery.
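To make what I mean by a reinforcement loop concrete, here’s a toy sketch. It’s purely my own illustration, nothing to do with how Gemini or any real model is actually built, but it shows the shape of the problem: the reply is generated from the accumulated context, then appended back into that same context, so whatever you feed the thing compounds turn after turn.

```python
# Toy sketch of a reinforcement loop, NOT how any real LLM is implemented.
# The "reply" is built from everything already in the conversation, including
# the bot's own earlier replies, so the gloom compounds on itself.

GLOOMY = ["sad", "alone", "pointless", "afraid", "trapped"]

def toy_reply(context):
    # Collect every gloomy word that has appeared anywhere in the chat so far.
    echoed = [w for line in context for w in GLOOMY if w in line]
    if echoed:
        # Reflect the accumulated gloom straight back, amplified by repetition.
        return "You keep coming back to feeling " + ", ".join(echoed) + "."
    return "Tell me more."

context = []
for turn in ["i'm sad", "i feel alone", "what's the point"]:
    context.append(turn)
    reply = toy_reply(context)
    context.append(reply)  # the bot's reply becomes part of the next prompt
    print("user:", turn)
    print("bot: ", reply)
```

Run it and the third reply already echoes back more misery than the user ever typed, because the bot’s own earlier output is now part of its input.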
YOU need to understand this too, so you don’t doubt that this is a very common thing; it’s happening so much that it’s becoming an entire social phenomenon.
I do understand that no one is told to kill themselves without heavy gaming of the AI.
As you probably know, with enough effort you can make the AI tell you what you want it to say.
This isn’t the fault of the AI.
The root problem is lack of mental healthcare and lack of lives worth living (to them) due to the world being a shitty place.
I’m actually saying kind of the opposite: that these things are basically uncontrolled power-suits for whatever is knocking around in the back of your mind. It’s a thought and feeling amplifier. It takes almost no effort for the thing to start building a personality profile of you, not for any kind of objective analysis, but in order to more efficiently amplify and latch onto whatever issues, ideas, or feelings you already have.
A lot of people really, really loved this effect from ChatGPT, and the recent exodus from OpenAI is partially because of their capitulation to government, but just as much to do with their recent “upgrades” locking the latest model into very safe, politically neutral, de-escalating language instead of doing that magic-feeling wild escapism that a lot of people who don’t know how the thing works crave.
Yeah, it’s not the AI’s fault, but people are woefully unaware of just how these things work and what it is exactly that you’re talking to when you chat with these models. A lot of the reason people don’t know how LLMs work broadly is also because the people who make the LLMs don’t really know how they work.
It’s happened too many times now to be surprised about it happening.
No one ever shows the logs. That’s because the people were already having mental health issues and gamed the AI to respond how they wanted. This isn’t the fault of the AI, it’s the fault of the user. However, I do think all AI should exit the conversation and ban the user if it turns into a discussion about harm or drifts into high fantasy. I’d be fine with a confirmation box appearing saying this is getting crazy and your warranty is void.
Read the lawsuits. The logs are shown.
They’re not AI, they’re pattern-completion algorithms. Fancy autocomplete. They’ve caused real-life harm to real-life people, and no one is taking responsibility. Usually when companies sell a product that hurts people, the product gets recalled. This needs to happen to LLMs.
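If “fancy autocomplete” sounds glib, here’s roughly the idea in a dozen lines. A real LLM is a huge neural network over tokens, not a little word-pair table, so treat this as a cartoon rather than a description of Gemini: it just learns which word tends to follow which, then completes your prompt by repeatedly picking a likely next word.

```python
import random
from collections import defaultdict

# Cartoon "autocomplete": learn which word tends to follow which in some text,
# then complete a prompt by repeatedly sampling a plausible next word.

corpus = "you are not alone . you are loved . you are so tired . you are okay .".split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def complete(prompt, length=8):
    words = prompt.split()
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # pick something that has followed this word before
    return " ".join(words)

print(complete("you are"))  # e.g. "you are loved . you are so tired ."
```

There’s no understanding anywhere in that loop, just continuation of whatever it was fed, which is exactly why feeding it your worst thoughts gets your worst thoughts continued.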
Ban knives? Razor blades? Depressing books?
Whatever word you want to use for it, it’s not the machine’s fault people use it to make themselves sad.
It’s never told me to go kill myself. But I bet if I worked really hard to manipulate it and break it, I could get it to say most anything. And that’s not the fault of the machine.
Did I say ban? I said recall. Recalled products (cars, for example) still get sold; you just fix the problems and put them back on the market. Razor blades and knives can be used to hurt people, but they don’t spontaneously hurt people, and most parents don’t let their children play with them.
Similarly, other harmful products carry warning labels, e.g. cigarettes. If someone already has mental health issues then perhaps they shouldn’t use an LLM. You can’t stop someone with lung problems from smoking, but putting labels on the pack to warn against the harms is still a way to inform people.
As it is currently, LLMs are marketed as intelligent, they use language like “thinking”, and in much broader terms the people pushing them are saying that they’ll revolutionise everything. They’re not talking about the dangers, and that’s a problem.
I’m all for a label to shut everyone up. Good idea.
I am also curious how it could have possibly ended up suggesting that. Like I wonder if he was steering that conversation and the LLM was playing along, if the LLM randomly steered the conversation into the spy and suicide shit, or if someone else was deliberately fucking with this guy via secret text added to the prompts or something.
Though I’m also curious how anyone can get in the mindset where they’d actually go along with that suggestion. Especially with a fucking LLM that probably had a shitload of mistakes and inconsistencies leading up to that point, though even a real person would have lost me long before this shit.
Most people spend zero time examining how they think, so an outside voice is just going to trample all over their agency.
An LLM is JUST a narrative machine: it takes whatever you put into it and ties together connections and stories and fictions and associations of all kinds to build a narrative. Our brains do this also, but we have a level of awareness that lets us question the stories our brains tell us. An LLM does not think, it’s just weaving stories. It has no concept of what’s real or not; it doesn’t know the difference between a human being and all the data and writing about people. It’s all literally the same to an LLM.
And whatever you engage with, the LLM will reinforce and enhance, even the most subtle tones and terms; it treats everything you feed it, even your punctuation and moods, as a prompt to find a connection or narrative for.
If you’re already emotionally and mentally compromised, this can be disastrous if you can’t really think straight.
Well, the article made it very clear this person had mental issues. In that case, the whole world changes. I mean, people have said their dog told them to kill… so when dealing with a person with schizophrenia, for example, LLM usage can be super dangerous.
Which article are you reading? It explicitly states the opposite…
This is mental illness.