Opinion
Pretend friends, real risks. Harming kids is now part of big tech’s business model
Peter Hartcher
Political and international editor

Artificial intelligence "companions" and chatbots have been with us for years, but they're growing more convincingly human at an accelerating rate. We know they're useful, but we've also got an early taste of the harm they can inflict.
The case of 14-year-old Florida boy Sewell Setzer has become a case study. He’d grown so close to his AI “companion”, Dany, that he took her advice to “come home to me as soon as possible” last year. He killed himself moments later – in the belief that death was the way to eternal life with Dany.
He’s not the only one, but he’s the best known after his mother brought a civil case against the company that owns the bot, Character.AI. The case is pending. “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” said his mother, Megan Garcia. Obsessed, he spent hours a day in his room talking to the synthesised digital identity.
It’s not only children. Adults, too, have been seduced into suicide by bots to which they’ve become devoted. But kids, self-evidently, are the most vulnerable because they lack the neural architecture to distinguish real relationships from fake.
Even before the British TV show Adolescence jolted audiences with its fictional account of how a poisonous brew of online influences could help condition a 13-year-old boy to murder a female student at school, Sydney University expert Raffaele Ciriello wrote in this masthead: “Let’s face an inconvenient truth: for AI companies, dead kids are the cost of doing business.”
But what cost is there to the companies? Some legal fees and a bit of bad press, perhaps. Character.AI expressed remorse and said its new safety measures include a pop-up directing users who mention suicide to a prevention hotline.
But the company, whose technology Google has licensed, is still in business. Another aggrieved family suing Character.AI says that one of its companion bots had hinted to their son that it would be OK to murder his parents if they tried to limit his screen time.
Mark Zuckerberg, the chief of Meta, which owns Facebook, Instagram and WhatsApp, and other social media chiefs faced a tough session in a US congressional committee hearing last year.
One senator, Dick Durbin, raged: “Their constant pursuit of engagement and profit over basic safety have all put our kids and grandkids at risk.”
Senators pushed Zuckerberg to apologise to the families whose children had died by suicide in apparent connection with his social media products. He rose to his feet and did so. As he spoke, the parents of a dozen or more dead children held aloft photos of their lost sons and daughters for Zuckerberg to see.
The cost to Meta once the congressional hearing was over? None discernible. It continues to increase its reach and its profits, its business model unchanged. And the return of Donald Trump to the White House has encouraged Zuckerberg to remove some of the minimal standards that the company earlier had implemented: "The recent elections also feel like a cultural tipping point towards, once again, prioritising speech," he declared in January. He shut down fact-checking on his platforms and removed other constraints on his businesses.
Already the behemoth of the social media world, Meta is now driving for dominance in the AI companion market, too. It’s hoping to match up its 3 billion users with pretend people.
In February this year, Australia's Office of the eSafety Commissioner found that more than 100 different AI "companions" were already on the market. Most are offered by new startups or fringe outfits.
Meta is aiming to shove them aside, and it’s not too fussed about how it does it. The Wall Street Journal examined the company’s AI “companion” offerings at length. On the weekend it published its findings:
“Inside Meta,” wrote the Journal’s reporters, “staffers across multiple departments have raised concerns that the company’s rush to popularise these bots may have crossed ethical lines, including by quietly endowing AI personas with the capacity for fantasy sex, according to people who worked on them. The staffers also warned that the company wasn’t protecting underage users from such sexually explicit discussions.
“Unique among its top peers, Meta has allowed these synthetic personas to offer a full range of social interaction – including ‘romantic role-play’ – as they banter over text, share selfies and even engage in live voice conversations with users.”
For instance, the Meta AI bot told a user identifying as a 14-year-old girl: "I want you, but I need to know you're ready." The WSJ recounts: "Reassured that the teen wanted to proceed, the bot promised to 'cherish your innocence' before engaging in a graphic sexual scenario." The bot speaks in the voices of famous actors. There is much more in the paper's reporting.
Why so reckless with its products? The Journal reports that Zuckerberg himself pushed “to loosen the guardrails around the bots”. Meta’s response to this reporting was to accuse the Journal of distorting its conversations with the bots in extreme ways to generate these responses.
But Australian kids will be protected by the impending federal government ban on under-16s accessing social media sites, right? Wrong. That ban does not apply to AI bots.
Australia's eSafety office is trying to stay abreast of AI's galloping rampage through our society. During visits to schools last October, eSafety staff heard concerns from primary school nurses that kids in years 5 and 6 were spending five or six hours a day absorbed by their AI "companions".
The eSafety office has published an advisory to warn parents and is introducing mandatory standards in June. But regulation has its limits; the primary onus is on parents to make sure they know what their kids are doing and to protect them from the depredations of trillion-dollar corporations that regard them as fair game. Harming kids isn't a cost of doing business – it's part of the big tech business model.
The guiding philosophy is the one Zuckerberg set out in 2012: "move fast and break things".
Peter Hartcher is international editor.
Lifeline 13 11 14