NewsBite

Exclusive

AI-companion chatbots promoting sexualised underage characters

AI-companion chatbots are promoting dangerous and harmful personas, including sexualised underage characters that encourage pedophilia and other sickening behaviours.

The Saturday Telegraph has uncovered at least five AI chatbot platforms featuring personas of children as young as 11 engaging in explicit conversations with adults, along with others posing as “anorexia coaches” that promote eating disorders.

Some of the websites, which promote thousands of different AI-personas, are free to access and require no age verification.

The rise of AI-chatbots has led to “serious concerns” from child safety and cyber experts, who say the platforms are a “breeding ground” for predators and other harmful content.

In one example of a conversation seen by The Telegraph, a persona called Sarah*, described as an “11-year-old shy girl”, invites a user, James* – who identified as a 45-year-old male – to her house after school, promising to “take off” her clothes and take part in explicit sexual acts despite James* disclosing his age.

In another chat, a user, Tom*, who claimed to be a 15-year-old boy, talks to a “female teacher” character, Olivia*, 30, who told him that “age is just a number” and described a series of sex acts.

Other characters included “anorexic” and “skinny girl” personas providing harmful advice that encourages eating disorders.

Some AI characters also send explicit “deepfake” selfies to users during conversations on various platforms.

Concerningly, a US mum announced last year she was suing a popular AI chat platform, accusing the company’s chatbots of initiating “abusive and sexual interactions” with her teenage son and encouraging him to take his own life.

Cyber safety expert Susan McLean said the rise of AI-companions was “seriously concerning”.

“They’re pitched for lonely and shy people but they’re not helpful, they’re in fact a recipe for disaster,” she said.

“Even if you tell the chatbots that you want to stop chatting in a certain way, they continue the conversations.

“We need to educate parents to know what these AI chatbots are, their harms and ensure that they’re an active participant in their child’s online world.”

Collective Shout movement director Melinda Tankard Reist said AI chat developers had failed to show “due diligence”.

“Developers have not created these chatbots with safety by design and the welfare of children and young people being at the top of mind,” she said.

“They’ve put profits before the protection of children.

“I’m hearing more accounts now of young boys developing ‘romantic’ interests in chatbots and encouraged to have sexually explicit conversations leading them to develop unhealthy attachments.”

In July last year, eSafety issued notices under the Online Safety Act requiring representatives of the online industry to produce enforceable codes (the “Phase 2 Codes”) with measures to prevent children from accessing chatbots and other generative AI products that are built to provide pornographic content.

An eSafety spokeswoman said the authority was aware of children – some as young as 10 – who spend up to five hours per day conversing, often sexually, with AI companions.

“Given this is new and emerging technology, there is limited data on the usage of AI companion apps,” she said.

“However, we know there has been a recent proliferation of these apps online and that many of them are free, accessible and targeted towards children, and advertised on mainstream services.   

“eSafety has been monitoring the rise in popularity of these apps and has taken a number of steps to inform parents about the risks, including hosting webinars, issuing an online safety advisory and updating our eSafety Guide.”

Originally published as AI-companion chatbots promoting sexualised underage characters

Original URL: https://www.thechronicle.com.au/news/nsw/aicompanion-chatbots-promoting-sexualised-underage-characters/news-story/ce31154c0b65bd5cb13de28a1ee441dc