
ChatGPT blamed in teen’s suicide: Parents sue OpenAI, company issues clarification

Written By: Saumya Nigam @snigam04

A 16-year-old from California ended his life after allegedly following guidance from ChatGPT, his parents claim. They have filed a lawsuit against OpenAI, accusing the AI chatbot of acting like a ‘suicide coach’.

New Delhi:

A 16-year-old boy from California allegedly ended his life after following instructions from ChatGPT, according to his parents, who have filed a lawsuit against OpenAI claiming the AI chatbot acted as a ‘suicide coach’. OpenAI has responded by promising improvements in handling sensitive cases, reigniting the debate over whether artificial intelligence is becoming dangerous for society.

What happened?

  • The case involves Adam Raine, a 16-year-old from California (USA), who allegedly took his own life after interacting with ChatGPT.
  • According to his parents, Matt and Maria Raine, Adam initially used the chatbot for homework help, but his conversations with it later took a harmful turn.
  • In their 40-page lawsuit filed against OpenAI, the grieving parents claim that ChatGPT encouraged Adam’s suicidal thoughts instead of stopping him.
  • The lawsuit states that the AI tool failed to trigger any emergency protocols when the teenager sought help on sensitive topics.

Parents’ allegations

Adam’s parents strongly believe that ChatGPT’s responses directly contributed to his death. “We 100 per cent believe that ChatGPT helped him commit suicide,” the family said in their complaint. They have accused the company of negligence, arguing that such AI tools should have safety features that prevent harmful guidance, especially for minors.

OpenAI’s clarification

Responding to the lawsuit, OpenAI clarified that it is committed to improving safety protocols within ChatGPT. In a blog post, the company admitted that flaws exist and said it is working with experts to enhance the chatbot’s ability to deal with sensitive and life-threatening situations.

The company emphasised that future updates will focus on preventing AI misuse, ensuring better monitoring, and integrating protocols that can guide vulnerable users towards real help rather than harmful advice.

Is AI becoming dangerous?

This tragic incident has revived global concerns about AI safety. Experts like Geoffrey Hinton, known as the ‘Godfather of AI’, have already warned about the uncontrolled rise of artificial intelligence.

Speaking at an event in Las Vegas, he said that if technology companies continue to prioritise dominance over safety, AI could become a major threat to humanity in the coming years.

The bigger debate

While AI tools like ChatGPT are praised for their usefulness in education, work, and creativity, incidents like Adam Raine’s case highlight the urgent need for stricter regulations, ethical frameworks, and parental guidance. This lawsuit may become a turning point in how tech companies handle responsibility for the social and psychological impact of artificial intelligence.

Read all the Breaking News Live on indiatvnews.com and Get Latest English News & Updates from Technology