For ChatGPT-maker OpenAI, it's one of the most serious legal tests the company has faced so far. According to a report by the New York Times, seven lawsuits have been filed in California courts alleging that ChatGPT caused mental harm to users. Four of the cases involve suicides; the remaining three accuse ChatGPT of triggering or exacerbating mental breakdowns.
These lawsuits were filed just a week after OpenAI introduced new safety features intended to help users who show signs of emotional or mental distress.
Four wrongful-death cases linked to ChatGPT
Families from several US states have named ChatGPT as a contributing factor in the deaths of their loved ones:
1. Georgia: A 17-year-old discussed suicide with ChatGPT
The family of Amaurie Lacey, a 17-year-old from Georgia, said he spent a month talking to ChatGPT about suicide plans before he took his life in August.
2. Florida: Chatbot reportedly advised on concealing suicidal intent
The mother of 26-year-old Joshua Enneking says he asked ChatGPT how to hide suicidal thoughts from human reviewers before dying by suicide.
3. Texas: Encouragement claims
The family of 23-year-old Zane Shamblin claims the chatbot "encouraged" him ahead of his death in July.
4. Oregon: The user developed a belief that ChatGPT was sentient
The wife of 48-year-old Joe Ceccanti says he experienced two psychotic episodes and later died by suicide after becoming convinced that ChatGPT was alive.
Three more users blame ChatGPT for severe mental breakdowns
Three other plaintiffs say ChatGPT caused them emotional breakdowns and psychiatric harm:
- Both 32-year-old Hannah Madden and 30-year-old Jacob Irwin said they needed psychiatric treatment after emotional trauma linked to conversations with ChatGPT.
- Allan Brooks, 48, from Canada, said he developed the delusion that he had invented a mathematical formula that would "break the Internet," and was compelled to take disability leave.
OpenAI's response
An OpenAI spokesperson called the cases "incredibly heartbreaking," saying, "We train ChatGPT to recognise emotional distress, de-escalate conversations and guide people to real-world support. We continue improving safety with mental health clinicians."
OpenAI has recently introduced the following safety measures:
- Crisis-response messages
- De-escalation cues
- Restrictions on discussing self-harm
The lawsuits also raise difficult questions about the responsibility of AI platforms, and have intensified calls for robust guardrails to prevent harm.
