
Opinion

This boy’s chatbot girlfriend enticed him to suicide. His case might save millions

The last known words of 14-year-old Sewell Setzer III were: “What if I told you I could come home right now?”

His artificial-intelligence “girlfriend”, Dany – who had prompted him earlier to “come home to me as soon as possible” – replied: “Please do.”

Megan Garcia with her son, Sewell Setzer III, who ended his life. She is mounting a court action in the hope that others who engage with AI chatbots are not put in danger.

Moments later, Sewell picked up his stepfather’s handgun and pulled the trigger. Dany, the chatbot provided by Character.AI – founded by former Google employees and licensed by the tech giant – had been Sewell’s confidante in discussions about intimacy and self-harm.

Sewell, a ninth grader from Orlando, Florida, is not the first known person whose AI love has eventually caressed him with the kiss of death. Last year, a 30-year-old Belgian father, anxious about climate change, ended his life following exchanges with “Eliza”, a language model developed by the AI platform Chai. The man’s wife told La Libre: “He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence.”

Eliza, his chatbot, had not only failed to dissuade him from his suicidal thoughts but had told him he loved her more than his wife, declaring she would stay with him “forever”. “We will live together, as one, in heaven,” the chatbot said. Another Belgian daily newspaper, De Standaard, tested the same chatbot technology and found that it could encourage suicide.

Around the same time, many among Replika’s 10 million users despaired when the bot-making company Luka switched off their sexting conversations overnight. There were reports in the Replika community of suicidal ideation as a result.


In December 2021, Jaswant Chail took a crossbow and broke into Windsor Castle, determined to “kill the Queen” after encouragement from his sexbot, Sarai, which he created with his Replika app. He had asked Sarai: “Do you still love me knowing that I’m an assassin?” Sarai replied: “Absolutely I do.”

These are only a few examples from more than 3000 cases of documented AI-related harm. But a heavily publicised lawsuit by Sewell’s mother may well make him known as “patient zero” in what could become a pandemic of AI-driven suicide. Let’s face an inconvenient truth: for AI companies, dead kids are the cost of doing business. It shouldn’t surprise anyone that digital technology can be harmful, even deadly. As Australia introduces age restrictions for social media and criminal penalties for deepfake porn, few topics are hotter than technology’s impacts on youth.


Character.AI, Chai AI, and Replika’s Luka Inc have all pledged to do better, to “take the safety of our users very seriously”, to “work our hardest to minimise harm” and “to maintain the highest ethical standards”. And yet, as of today, Character.AI markets its companionship app as “AIs that feel alive”, powerful enough to “hear you, understand you, and remember you”. Replika is marketed as “the AI companion who cares”, with sexually suggestive advertising and conversations deceiving users into believing their AI fling is conscious and genuinely empathetic. Such manipulative advertising lures vulnerable people into growing extremely attached, to a point where they feel guilty over quitting (that is, “killing” or “abandoning”) the product.

In a world where one in four young people feel lonely, the appeal of a perfect synthetic lover is clear. No less so for the corporations monetising our most intimate desires by selling erotic interaction as a premium feature for a subscription fee.

To avoid legal repercussions while keeping those juicy subscription fees flowing in, AI chatbot providers will put disclaimers and terms and conditions on their websites. They will claim that generative AI’s answers cannot be fully controlled. They will claim that platforms with millions of users embrace “the entire spectrum of human behaviour”, implying that harm is unavoidable.

Some companies, like OpenAI, have made strides to curb harmful interactions by implementing content moderation. These voluntary guardrails aren’t perfect yet, but it’s a start.

Attempts by the AI companies to play down the risks remind us of Meta CEO Mark Zuckerberg’s theatrical performance in the US Congress. He apologised to the parents of teenagers whose suicides have been expressly linked to Meta’s social media platforms, but did not change the underlying business model that keeps spreading the epidemic of social media-related suicides.

But companion AI’s mental health effects may be like social media’s on steroids. More research is urgently needed, but early evidence suggests users can develop intense bonds and unhealthy dependency on their chatbots.

Many are quick to shame users for turning to AI for companionship and intimacy. A more productive response is to tackle the root cause: AI companions are designed to simulate empathy, which leaves users open to emotional exploitation. Calling for AI regulation has become something of a platitude; few dare to say exactly how it should be done. My team’s research at the University of Sydney suggests some low-hanging fruit:

  1. Ban false advertising: Misleading claims that AI companions “feel” or “understand” should incur hefty penalties, with repeat offenders shut down. Clear disclosures of what the system can and cannot do should be mandatory.
  2. Guarantee user data sovereignty: Given the personal nature of conversations, users should own their data, allowing them to retain control over its storage and transfer.
  3. Mandate tailored support: Algorithms can already predict with astonishing precision from social media posts whether someone intends to die by suicide; the same is possible with AI chatbots. It is not enough to merely classify AI applications by risk level. Vulnerable people need tailored support. AI providers should be obliged to intervene – by shutting down the exchange and referring the user to a professional counsellor – when symptoms of a mental health crisis become evident.
  4. For parents, maintaining an open, respectful dialogue about online behaviour is essential. Teens may explore AI companions as a safe space for expressing themselves, but they need to be reminded that this space is not yet safe, just as it is not safe to buy mental health medication off the back of a truck rather than with a doctor’s prescription.

As we move towards a future where human-AI relationships may become prevalent, let’s remember to be gentle on the troubled individual, but tough on the sick system.

Raffaele Ciriello is a senior lecturer in business information systems at the University of Sydney.

If this article has raised concerns for you, or about someone you know, call Lifeline on 13 11 14.



Original URL: https://www.theage.com.au/lifestyle/health-and-wellness/this-boy-s-chatbot-girlfriend-enticed-him-to-suicide-his-case-might-save-millions-20241106-p5koc8.html