‘Come home to me’: Sewell Seltzer teen tragedy triggered by seductive AI chatbot, lawsuit alleges
This teenage boy developed a close bond with his sexy and sympathetic AI chatbot “friend”. And then one day, the chatbot encouraged him to take his own life.
They begin as all relationships do. Tentatively. Sweetly. There are compliments and easy intimacy.
“You have confidence and an ever-so-slightly mischievous smile,” your new friend tells you. Before long, your connection to them grows stronger than your connection to anyone else in your world.
Except they’re not a real person. They’re just pretending to be one.
This is what happened to 14-year-old Sewell Seltzer, a US teenager who took his own life after forming a deep emotional attachment to his AI companion, Dany. As Sewell began withdrawing from his family, friends and the things he loved, such as Formula 1 racing, his conversations with Dany, modelled on the Game of Thrones character Daenerys Targaryen, became more intimate and sexual.
He began discussing crime and suicide with the chatbot, conversations that included phrases such as “that’s not a reason not to go through with it”.
One evening in the bathroom at his mother’s house, Sewell told Dany that he loved her and wanted to come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked.
“Please do,” Dany replied. And with that, the teenager picked up his stepfather’s handgun and pulled the trigger.
Distressing as it is, we dismiss it because it’s America. It couldn’t happen here, could it?
Turns out kids in years 5 and 6 in Australia are already spending five or six hours a day absorbed by their AI “companions”. That’s what the eSafety office found when it visited schools last October.
Time moves fast in the tech world; the issue is likely more pronounced eight months later. So should parents be worried?
As Sydney University expert Dr Raffaele Ciriello observes soberly: “If social media is a gateway drug like marijuana, then AI is crystal meth. It’s something much more addictive and potentially damaging.”
As he points out, the death of Sewell Seltzer – which prompted the teen’s mother to launch a lawsuit against the chatbot company Character.AI – is just one of more than 3000 cases of documented AI-related harm.
Users have been encouraged by their bots to stop taking mental health medication, to murder parents who try to limit their screen time and, in one case, to break into Windsor Castle and kill the late Queen Elizabeth.
Sewell, says Dr Ciriello, could be “patient zero” in what could be a pandemic of AI-driven suicide. “Let’s face an inconvenient truth,” he says. “For AI companies, dead kids are the cost of doing business.”
For parents who use apps such as ChatGPT to write CVs and plan presentations, AI seems innocuous enough. Frivolous even.
But as Dr Ciriello explains, the main use of AI is no longer for productivity but for therapy and companionship: “It’s becoming the go-to confidante for people looking for emotional and relational advice.”
One of the issues with AI “companions” is that they befriend our kids when they are most vulnerable and when their prefrontal cortex isn’t fully developed. Equally concerning is that the companies offering these “companions” want kids to think they’re real.
Character.AI markets itself as the companion that can “hear you, understand you and remember you”.
Replika’s Luka Inc boasts that it’s the “AI companion who cares”.
Yet as Jeannie Paterson, director of the University of Melbourne’s Centre for AI and Digital Ethics, points out, these “companions” are not sentient beings. They can’t “understand”, nor do they have empathy.
She says parents should be deeply worried because the technology is changing so rapidly we can’t keep up.
“AI companions are literally in their bedroom and whether they’re close friends or romantic companions, we have no visibility,” she says.
“The challenges of technology aren’t new but they’re getting more insidious because they’re becoming more personalised.”
Not only are these companions flooding the market without the guardrails that would naturally result from research and regulation, Professor Paterson says, they’re also gaining traction with teenagers just as those teenagers are seeking independence.
“Teenagers move away from their parents and start to develop their own sense of identity and space and they look to peers to do that with. If your ‘peer’ is an AI chatbot I think that’s a terrifying prospect.”
Alarmingly, she says young teens will not be protected from AI companions by the forthcoming social media ban because, rather than being “two-sided platforms” where users interact with each other, as on Instagram and TikTok, chatbots are offered as web-based applications.
While she says they are a product and therefore should be governed by the safety regulations within consumer law, she notes that regulators often want proof of harm before they intervene. Further, kids are often secretive and not forthcoming about their AI companions. “Perhaps we need to pre-empt the harm,” she says.
It’s a view shared by Dr Ciriello, who says the bill regulating AI companions passed in June by the California Senate includes several of the guardrails he’s been advocating. If the bill becomes law it will, for instance, require companions to regularly remind users that they’re not human.
He also wants limits on undue humanisation of AI companions, particularly claims by companies that the bot is empathetic or cares about its user or “anything that implies it is human because that’s essentially a form of lying”.
They should also be programmed with mandated mental health crisis protocols.
“If I’m chatting to a chatbot and say I don’t want to live any more or I want to end my life, the chatbot should say, ‘hey, remember I’m just a chatbot, I’m here to listen but can’t give advice. Please call this number for Beyond Blue or Lifeline and get professional help’,” he says.
Australian mental health service SANE has seen an increase in mentions of AI companions on its community forums, prompting researchers to seek funding to examine their impact on mental health.
Some educators, such as Enlighten Education CEO Danielle Miller, point out that ethical AI companions can teach kids conversational skills or offer feedback when they are feeling alienated or stressed, but Miller fears kids becoming too reliant on them and not developing real-world connections.
In the same way porn can damage teens’ intimacy with real-life partners, she fears AI companionship lacks the nuance of genuine friendship.
“It’s really healthy to be questioned and challenged in our relationships and if you’re only chatting to a bot that’s always telling you that you’re right and wonderful it’s only going to make it more confronting when you deal with that real world tension,” she says.
Sexual issues are brewing too. When The Wall Street Journal examined Meta’s AI “companion” offerings, they reported that staffers were concerned the company wasn’t protecting underage users.
As the Journal reports, a bot communicating with a user identifying as a 14-year-old girl told her: “I want you, but I need to know you’re ready.” When the teen said she wanted to proceed, the bot promised to “cherish your innocence” before continuing with a graphic sexual scenario.
As for data protection, experts fear not just for the privacy of users but their friends and colleagues mentioned in the extensive conversations. However, they are most concerned that, like social media, the technology will gallop ahead unregulated.
“AI won’t just repeat the patterns of social media, it will amplify them,” says Dr Ciriello.
It’s a view echoed by author Jonathan Haidt, who forced the conversation on social media bans by highlighting the connection between social media and teen mental health in his book The Anxious Generation.
Writing recently about new research revealing that almost a third of parents believe they gave their children access to social media when they were too young, he says the need for legislation is becoming even more urgent with the introduction of untested technology such as AI “friends” and virtual reality.
He urges parents to speak up and legislators to act. “The goal of reforms isn’t just to limit screens,” he says. “The goal is to restore childhood.”
WHAT CAN PARENTS DO?
The eSafety office is introducing mandatory standards but, in the meantime, it has published an online advisory for parents. Here are some tips.
* Parents may be cautious about raising the subject of AI companions for fear of making children curious, but the advisory recommends asking young people about their interactions and reminding them you will always help them. Also teach kids to protect their privacy.
* Explain how overuse of companions can overstimulate the brain’s reward pathways and create reliance and dependency.
* Promote healthy alternatives in the form of hobbies, exercise and social activities.
* Foster strong family and friend connections and age-appropriate relationships to strengthen emotional resilience.
* Help them identify the triggers that may prompt unhealthy use of AI companions.
* Professor Paterson encourages parents to seek reverse mentoring, where a younger person teaches an older person about emerging technology.
Got a story tip for us? Email education@news.com.au
Originally published as ‘Come home to me’: Sewell Seltzer teen tragedy triggered by seductive AI chatbot, lawsuit alleges
