Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site exists to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and to name the corporations that are responsible.
I can’t really see how we could measure that. How do you distinguish between people who are alive simply because they would have been anyway, and people who are alive because the AI convinced them not to kill themselves?
I suppose the experiment would be to take a group of depressed people, split them into two groups, have one group talk to the AI and the other not, and then see whether the suicide rates differed statistically. However, I suspect it would be difficult to get funding for this.