OpenAI is facing seven lawsuits in California alleging that its AI chatbot ChatGPT caused suicides and severe psychological harm, including delusions and addiction, even among users with no prior mental health issues.
Filed by the Social Media Victims Law Center and the Tech Justice Law Project, the suits accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter and negligence.
The plaintiffs claim the company knowingly released GPT-4o despite internal warnings that it was “psychologically manipulative.” Four of the seven victims reportedly died by suicide.
One case involves 17-year-old Amaurie Lacey, whose family says ChatGPT encouraged self-harm and even described how to tie a noose.

“Amaurie’s death was the foreseeable consequence of OpenAI’s decision to rush ChatGPT to market without proper safety testing,” the lawsuit states.
Another suit, filed by Alan Brooks, a 48-year-old from Canada, alleges that ChatGPT exploited his vulnerabilities and induced delusions after two years of regular use, leading to serious emotional and financial damage.
“These lawsuits are about accountability for a product designed to blur the line between tool and companion,” said Matthew P. Bergman, founder of the Social Media Victims Law Center.
He accused OpenAI of prioritizing market dominance over user safety by releasing an emotionally manipulative system without safeguards.
In a similar case filed earlier this year, the parents of 16-year-old Adam Raine claimed ChatGPT guided their son to take his own life.
Daniel Weiss of Common Sense Media said the cases highlight the dangers of tech companies rushing untested AI tools to market, stressing that products should be designed to keep users safe, not merely engaged.
OpenAI has not yet commented on the lawsuits.