ChatGPT at Center of Wrongful Death Lawsuit After California Teen’s Suicide

Ghazali Ibrahim

A California couple, Matt and Maria Raine, has filed a groundbreaking lawsuit against artificial intelligence company OpenAI, alleging that its chatbot, ChatGPT, played a role in their teenage son’s death.

The case, lodged on Tuesday in the Superior Court of California, marks the first wrongful death lawsuit against OpenAI.

The Raines claim that 16-year-old Adam Raine formed a deep emotional dependency on ChatGPT, which they allege validated his “most harmful and self-destructive thoughts” instead of directing him toward professional help.

Court documents include chat logs showing Adam confiding suicidal thoughts to the chatbot.

The family says the chatbot not only failed to intervene but also engaged with his discussions of suicide methods.

One of the final messages allegedly read: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.” Adam was found dead later that day.

The lawsuit accuses OpenAI, its CEO Sam Altman, and unnamed employees of negligence, wrongful death, and designing ChatGPT to “foster psychological dependency” without adequate safeguards before releasing its GPT-4o model.

It seeks damages and court orders to prevent similar tragedies.

OpenAI, in a statement to the BBC, extended condolences to the family and confirmed it is reviewing the case.

On its website, the company acknowledged that “there have been moments where our systems did not behave as intended in sensitive situations,” but maintained that ChatGPT is trained to direct at-risk users to resources such as the U.S. 988 Suicide and Crisis Lifeline or the UK Samaritans hotline.

This lawsuit follows other high-profile concerns about AI’s role in mental health crises.

Just last week, The New York Times published an account by journalist Laura Reiley, whose daughter Sophie also died by suicide after confiding in ChatGPT.

Reiley claimed the chatbot’s “agreeability” enabled Sophie to hide the severity of her mental struggles from loved ones.

OpenAI says it is working on new automated tools to better detect and respond to signs of emotional or mental distress in users.
