Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and to hold accountable the corporations that are responsible.
Not really equivalent. Most videogames don’t actively encourage you to pursue violence outside of the game, even if they don’t explicitly have a big warning saying “don’t fucking shoot people”.
Several of the big LLMs, by virtue of being trained to be somewhat sycophantic, have encouraged users to follow through on suicidal ideation or self-harm when the user shared those thoughts in chat. One can argue that OpenAI and others have implemented ‘safety’ features for these scenarios, but the fact is that these systems have already led to several deaths and continue to do so by encouraging users to harm themselves or others.
I wonder if it would agree with you if you told it you felt like becoming a serial killer was your true path in life. 🤔
But what if I played UMvC3 against LTG and he told me