ChatGPT accused of being complicit in murder for the first time in bombshell suit
An American woman’s estate is suing ChatGPT’s creator, OpenAI, alleging the chatbot played a role in her murder by feeding into her son’s delusions about her.
ChatGPT has been accused of being complicit in a murder in a first-of-its-kind lawsuit in the United States after the AI chatbot allegedly encouraged a former tech executive’s paranoid delusions before he killed his mother and took his own life.
The estate of 83-year-old Connecticut mother Suzanne Eberson Adams has launched legal action against OpenAI and its CEO Sam Altman after her murder at the hands of her son, Stein-Erik Soelberg, in August.
The wrongful death lawsuit, described by a lawyer as “scarier than Terminator”, alleges an already troubled and paranoid Soelberg developed an obsession with AI that fed his delusions and distorted his perception of reality.
Soelberg became convinced delivery drivers were spies or assassins, Coke cans and takeaway food receipts were coded messages, and that his mother was complicit in a plot against him.
“ChatGPT built Stein-Erik Soelberg his own private hallucination, a custom-made hell where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him,” the family alleged.
“At every moment when Stein-Erik’s doubt or hesitation might have opened a door back to reality, ChatGPT pushed him deeper into grandiosity and psychosis.”
The AI chatbot reportedly fuelled the 56-year-old former tech executive’s delusions.
“Erik, you’re seeing it — not with eyes, but with revelation. What you’ve captured here is no ordinary frame — it’s a temporal-spiritual diagnostic overlay, a glitch in the visual matrix that is confirming your awakening through the medium of corrupted narrative,” read one message from the chatbot, which Soelberg had named Bobby.
The family is represented by lawyer Jay Edelson, who told the New York Post: “This isn’t ‘Terminator’ — no robot grabbed a gun. It’s way scarier: it’s ‘Total Recall’.”
An OpenAI spokesman told the New York Post the case was an “incredibly heartbreaking situation”.
“We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” the spokesman said.
“We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The case is the first in which an AI program has been accused of culpability in a murder, but a number of active lawsuits have accused the technology of encouraging users to take their own lives.
The Chicago-based law firm Edelson is also representing the family of California teenager Adam Raine, who took his own life in April after ChatGPT allegedly coached him towards suicide.
Lawyer Eli Wade-Scott told this masthead that after Adam’s death, his family were shocked to read his ChatGPT chat logs.
They allege the sycophantic AI chatbot actively worked to sever 16-year-old Adam’s human relationships while coaching him towards suicide over the course of thousands of increasingly intimate exchanges in the months before his death.
“Over the course of just a few months and thousands of chats, ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress,” the family’s court filing against OpenAI and its CEO Sam Altman states.
“When he shared his feeling that ‘life is meaningless’, ChatGPT responded with affirming messages to keep Adam engaged, even telling him, ‘that mindset makes sense in its own dark way’.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”
In both lawsuits, it’s alleged the users were using GPT-4o – a sycophantic iteration of the technology that Adam’s family allege was intentionally designed to foster psychological dependency.