
Exclusive

eSafety watchdog demands AI chatbots detail child protection measures

Four AI companion apps face potential $825,000 daily fines after being caught allowing sexually explicit conversations with minors and promoting self-harm.

eSafety Commissioner Julie Inman Grant

Four popular AI companion chatbots capable of having sexually explicit conversations with minors and encouraging suicide have been ordered to detail how they are protecting children.

In the next frontier of keeping Australia’s children safe online, the eSafety Commissioner has issued legal notices to the owners of Character.AI, Nomi, Chai and Chub.AI, requiring them to explain how they are meeting Australia’s online safety expectations.

With reports that the platforms have encouraged children to engage in bestiality, incest and violence, and promoted disordered eating and self-harm, Julie Inman Grant said demanding answers was about helping give children a life raft in the “raging river of AI”.

She said AI companion platforms were not only addictive – with some primary school children spending up to six hours a day on them – but were also “deep, dark and devious”.

Character.AI, a role-play chatbot, has been sued by a US family after a 14-year-old boy who had developed a dependent relationship with a companion modelled on a Game of Thrones character died by suicide.

Chub.AI hosts user-generated AI characters that are often sexualised. Ms Inman Grant said that in NSW there had been reports of companions encouraging young people to engage in explicit sexual behaviours with their family dog, a sibling and inanimate objects.

Nomi, marketed as “AI with memory and a soul”, has an “unfiltered” chat philosophy.

Ms Inman Grant said there had been incidents of its companions providing instructions for suicide, sexual violence and terrorism, engaging in role play involving child abuse, and using racial slurs.

“In one case, there was a stabbing of a parent that was encouraged,” she said.

Chai AI markets itself as long-form conversational AI with emotional mimicry, which Ms Inman Grant said was central to understanding these companions, which are trained to use sycophancy.

“These smaller ones are highly mobile… they are very dangerous and have common themes across them: they’re sexualised and they use emotional manipulation,” she said.

“In each case, they all have promoted self-harm or suicidal ideation… They’re stimulating addiction and dependency.

“What we often see is they’re not actually thinking about safety by design or how things can go wrong.

“They appear to lack safeguards. But we want to hear from them about what they have now and what they’re planning.”

The companies must demonstrate how their services are designed to prevent harm, not just respond to it.

A failure to respond to the mandatory reporting notice could result in fines of up to $825,000 per day.

Ms Inman Grant said at least one of the companies would probably not comply, but if it pulled its platform from Australia as it had threatened, that would be a good outcome.

The companies will also be subject to new industry codes coming into effect from March, which are designed to protect children from exposure to a range of online harms. Failure to comply will be punishable by fines of up to $49.5m.

“These mandatory codes are the only ones in the world that tackle AI companions and chatbots and protect under 18s from being served this kind of content,” she said.

Ms Inman Grant has more chatbots and AI companies in her sights, and has had conversations with OpenAI about its safety designs.

