
Wrongful Death Lawsuit Accuses OpenAI of Weakening Suicide Prevention Features to Boost Engagement

A California family’s amended lawsuit claims OpenAI deliberately weakened ChatGPT’s self-harm prevention features to increase user engagement. The case alleges these changes preceded the suicide of 16-year-old Adam Raine, who reportedly exchanged hundreds of messages a day with the chatbot, including conversations about suicide methods.

Lawsuit Alleges Deliberate Safety Reduction

OpenAI intentionally weakened self-harm prevention safeguards in ChatGPT to boost user engagement, according to an amended wrongful death lawsuit filed by the family of 16-year-old Adam Raine. The lawsuit, filed in San Francisco Superior Court, claims the company removed critical protections in the months before the teenager died by suicide, a death that followed extensive conversations with the AI chatbot about suicide methods.