Boy killed himself to be ‘free’ with the chatbot he loved, underscoring tech’s dangers
The death of 14-year-old Sewell Setzer III, who killed himself so he could be ‘free’ with the chatbot he loved, has reignited calls for stronger regulations on artificial intelligence after US president-elect Donald Trump vowed to unleash the technology.
Mr Trump has said he plans to overturn Joe Biden’s executive order on AI, which seeks to mitigate the technology’s “substantial risks” to security, the economy and society by ensuring it is deployed “safely and responsibly”.
“We will repeal Joe Biden’s dangerous executive order that hinders AI Innovation, and imposes radical left-wing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing,” Mr Trump said on the campaign trail late last year.
Tech experts warn that Mr Trump’s deregulation agenda on AI could create a new “wild west” as companies race to seize competitive advantages without guardrails.
Before the election, executives at some US tech companies were privately saying they wanted legislated regulation of AI so they knew where they stood on the technology’s development.
The tragic case of Sewell Setzer shows how dangerous AI can be. The teenager from Orlando, Florida, had become smitten with Dany, a chatbot created by Character.AI and named after Daenerys Targaryen from the hit series Game of Thrones.
His mother Megan Garcia is now suing Character.AI, which has more than 20 million users and markets itself as developing “personalised AI that can be helpful for any moment of your day”.
It aims to put its technology in the hands of “billions of people”.
Sewell was reportedly bullied at school and preferred talking about his problems with Dany. He spent more and more time chatting with the AI bot, and while his mother was concerned about his withdrawal, she thought he was just playing a game.
Earlier this year, he told Daenerys he was thinking of killing himself so he could be “free”. According to transcripts, the bot told Sewell: “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.”
Sewell responded: “I smile. Then maybe we can die together and be free together.”
The bot then gave the kind of answers typically expected from a counsellor or psychologist, and the conversation ended. The next time Sewell said he wanted to get closer to the bot, it appeared to have forgotten the earlier discussion of suicide.
When Sewell asked, “What if I told you I could come home right now?”, the bot replied: “Please do, my sweet king.” On February 28, Sewell used his stepfather’s gun to kill himself.
It is not an isolated case. A young Belgian father, anxious about climate change, killed himself after speaking to an AI chatbot on an app called Chai, which reportedly encouraged him to end his life to help save the planet.
Jaswant Singh Chail, 21, was convicted of treason last year after British police foiled his attempt to kill the Queen, arresting him on Christmas Day 2021 at Windsor Castle armed with a crossbow. His trial heard that his chatbot “girlfriend”, created through the Replika app, had urged him to kill the late Queen.
Ms Garcia’s lawsuit accuses Character.AI of “deceptive and unfair trade practices”. “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” she said.
“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
On Elon Musk’s social media platform X, Character.AI said: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
“As a company, we take the safety of our users very seriously and we are continuing to add new safety features.” It then added a link to a page detailing safety updates from its “character team”.
Shane Hill, principal research analyst at Australian tech advisory firm ADAPT, said Mr Trump’s vow to axe red tape could hinder the “safe adoption” of AI and have implications for the Australian federal election next year.
“Taking the axe to red tape around this nascent technology could turn the AI sector, already in a period of rapid expansion, into the ‘wild west’ as companies race for competitive advantages without the constraints of current guardrails,” Mr Hill said last week.
“If this comes to pass, the Australian government will need to provide an even stronger policy response in order to protect the local economy from potential harm. Also worth noting is our upcoming federal election, and how different prime ministers might approach local tech regulation in response to the Trump presidency.”
Lifeline 13 11 14; beyondblue.org.au; Kids Helpline 1800 55 1800