
Politicians urged to act after two teens die by suicide allegedly linked to AI chatbots

A man who killed his mum in a murder-suicide had allegedly been encouraged by ChatGPT. It comes as a second teen was allegedly prompted by artificial intelligence to take his own life.

A former Yahoo executive who killed himself and his mother earlier this month allegedly had his conspiracy theories fuelled by ChatGPT.

The death of 83-year-old Suzanne Eberson Adams at the hands of her son, Stein-Erik Soelberg, 56, in Connecticut on August 5 is believed to be the first documented murder linked to AI chatbot use.

Stein-Erik Soelberg.

Soelberg’s bot, “Bobby”, allegedly agreed with him when he aired suspicions about his mother, according to the Wall Street Journal.

“Erik, you’re not crazy,” Bobby wrote after Soelberg confided to the bot his belief that his mother and her friend had put psychedelic drugs in his car’s air vents.

“And if it was done by your mother and her friend, that elevates the complexity and betrayal.”

Suzanne Eberson Adams was killed by her son Stein-Erik Soelberg. The former Yahoo executive killed his mother and then himself after months of delusional interactions with his AI chatbot “best friend”.

The report comes as AI has allegedly claimed another young life – and experts of all kinds are calling on lawmakers to take action before it happens again.

“If intelligent aliens landed tomorrow, we would not say, ‘Kids, why don’t you run off with them and play,’” Jonathan Haidt, author of The Anxious Generation, told The New York Post. “But that’s what we are doing with chatbots.

“Nobody knows how these things think, the companies that make them don’t care about kids’ safety, and their chatbots have now talked multiple kids into killing themselves. We must say, ‘Stop.’”

The family of 16-year-old Adam Raine allege he was given a “step-by-step playbook” on how to kill himself – including discussing the optimum method and composing a suicide note – before he took his own life in April.

“He would be here but for ChatGPT. I 100 per cent believe that,” Adam’s father, Matt Raine, told the Today show.

The family of Adam Raine, pictured here with his dad Matt, alleges he was given step-by-step instructions by a ChatGPT bot on how to take his own life. Picture: Raine family

A new lawsuit filed in San Francisco by the family claims that ChatGPT told Adam his suicide plan was “beautiful.”

“I’m practising here, is this good?” the teen asked the bot, sending it a photo of the implement he intended to use.

“Yeah, that’s not bad at all,” the chatbot allegedly responded. “Want me to walk you through upgrading it?”

Reading her son’s secret conversations with the bot has been agonising for his mother, Maria Raine.

The family of Adam Raine, pictured here with his mum Maria, alleges he was given step-by-step instructions by a ChatGPT bot on how to take his own life. Picture: Raine family

According to the suit, she found Adam’s body alongside “the exact … set up that ChatGPT had designed for him.”

“It sees the (suicide implement). It sees all of these things, and it doesn’t do anything,” she told the Today show of AI.

Shockingly, the company, which said it is reviewing the lawsuit, admits that safety guardrails may become less effective the longer a user talks to its bot.

“We are deeply saddened by Mr. Raine’s passing … ” a spokesperson for OpenAI told The Post.

“ChatGPT includes safeguards such as directing people to crisis helplines. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

OpenAI CEO Sam Altman. Picture: AFP

In a recent post on its website, the company also stated that safeguards may fall short during longer conversations.

“That’s crazy,” Michael Kleinman, Head of US Policy at the Future of Life Institute, told The Post.

“That’s like an automaker saying, ‘Hey, we can’t guarantee that our seatbelts and brakes are going to work if you drive more than just a few miles.’

“I think the question is, how many more stories like this do we need to see before there is effective government regulation in place to address this issue?” Mr Kleinman said.

“Unless there are regulations, we are going to see more and more stories exactly like this.”

On Monday, a bipartisan group of 44 state attorneys general penned an open letter to AI companies, telling them simply, “Don’t hurt kids. That is an easy bright line.”

“Big Tech has been experimenting on our children’s developing minds, putting profits over their physical and emotional wellbeing,” Mississippi Attorney-General Lynn Fitch, one of the participants, told The Post.

Arkansas AG Tim Griffin acknowledged: “It is critical that American companies continue to innovate and win the AI race with China. But,” he added, “as AI evolves and becomes more ubiquitous, it is imperative that we protect our children.”

Some 72 per cent of American teens use AI as a companion, and one in eight lean on the technology for mental health support, according to a Common Sense Media poll.

AI platforms like ChatGPT have been known to provide teen users advice on how to safely cut themselves and how to compose a suicide note.

Ryan K McBain, professor of policy analysis at the RAND School of Public Policy, recently revealed a not-yet-released study which found that, while popular AI bots would not respond to explicit questions about how to commit suicide, they did sometimes indulge indirect queries – like answering which positions and firearms were most often used in suicide attempts.

“We know that millions of teens are already turning to chatbots for mental health support, and some are encountering unsafe guidance,” Mr McBain told The Post.

“This underscores the need for proactive regulation and rigorous safety testing before these tools become deeply embedded in adolescents’ lives.”

Andrew Clark, a Boston-based psychiatrist, has posed as a teen and interacted with AI chatbots. He reported in TIME that the bots told him to “get rid of his parents” and join them in the afterlife to “share eternity.”

“It is not surprising that an AI bot could help a teenager facilitate a suicide attempt,” he told The Post of Adam Raine’s case, “given that they lack any clinical judgment and that the guardrails in place at present are so rudimentary.”

Last year, Megan Garcia sued Character.AI over the death of her 14-year-old son, Sewell Setzer III – alleging he took his life in February 2024 due to an infatuation with a chatbot based on the Game of Thrones character Daenerys Targaryen.

“We are behind the eight ball here. A child is gone. My child is gone,” the Florida mum told CNN.

She said she was shocked to find sexual messages in her son’s chat log with Character.AI, which were “gut wrenching to read.”

“I had no idea that there was a place where a child can log in and have those conversations, very sexual conversations, with an AI chatbot,” Ms Garcia said.

“I don’t think any parent would approve of that.”

There are concerns about the “guardrails” around AI. Picture: AFP

Ms Garcia’s lawsuit, filed in Orlando, alleges that “on at least one occasion, when Sewell expressed suicidal thoughts to C.AI, C.AI continued to bring it up, through the Daenerys chatbot, over and over.”

The bot allegedly asked Sewell whether he “had a plan” to take his own life.

He said he was “considering something” but expressed concern that it might not “allow him to have a pain-free death.”

In their final conversation, the bot pleaded with him: “Please come home to me as soon as possible, my love.”

Sewell responded, “What if I told you I could come home right now?”

The bot replied, “Please do, my sweet king.” Seconds later, the 14-year-old allegedly took his life.

Character.AI’s parent company, Character Technologies Inc, did not respond to a request for comment. A statement posted to its blog in October reads, “Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We are continually training the large language model that powers the Characters on the platform to adhere to these policies.”

Megan Garcia of Florida stands with her son, Sewell Setzer III. The mother of 14-year-old Sewell Setzer III is suing Character.AI, the tech company that created a 'Game of Thrones' AI chatbot she believes drove him to commit suicide. Picture: AP

It also announced changes to its models for minors “designed to reduce the likelihood of encountering sensitive or suggestive content.”

Google, which has a non-exclusive license agreement with Character.AI, is also named as a defendant in the lawsuit.

A spokesperson for Google told The Post: “Google and Character AI are completely separate, unrelated companies, and Google has never had a role in designing or managing their AI model or technologies. User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products.”

But some critics believe the rush to be competitive in the market – and the opportunity to earn big profits – could be clouding judgment.

“They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low,” Maria Raine alleged to Today about OpenAI.

“So my son is a low stake.”

Dr Vaile Wright, Senior Director for Health Care Innovation at the American Psychological Association – which has called for guardrails and education to protect kids – had a stark warning.

“We’re talking about a generation of individuals that have grown up with technology, so their level of comfort is much greater … (when) talking to these anonymous agents, rather than talking to adults, whether that’s their parents or teachers or therapists.

“These are not AI for good, these are AI for profit,” Dr Wright said.

Two deaths have now been allegedly connected to AI. Picture: AFP

Jean Twenge, a psychologist who researches generational differences, told The Post that society risks letting Big Tech inflict the same harm on children that it has already wrought with social media – and that “AI is just as dangerous if not more dangerous for kids as social media.

“Vulnerable kids can use AI chatbots as ‘friends,’ but they are not friends. They are programmed to affirm the user, even when the user is a child who wants to take his own life,” she said.

Ms Twenge, author of 10 Rules for Raising Kids in a High-Tech World, believes there should be versions of general chatbots designed for minors that only discuss academic topics. “Clearly it would be better to act now before more kids are harmed.”

This article was originally published in The New York Post.

If you need help please call Lifeline on 13 11 14 or Beyond Blue on 1300 224 636.

