NewsBite

Exclusive

Parents and teachers alerted over chatbot child abuse

‘An alarming development’: Australia’s eSafety Commissioner has warned that artificial intelligence (AI) could be used to groom children for sexual abuse.

Schools and parents have been warned pedophiles could use artificial intelligence chatbots to groom children for sexual abuse. Picture: Supplied

Pedophiles could use artificial intelligence chatbots to groom children for sexual abuse, Australia’s eSafety Commissioner has warned schools and parents.

Julie Inman Grant, the federal government’s online safety watchdog, called for tough rules over the use of generative AI, which can mimic human speech and writing and generate deep fake photos and videos. The eSafety Commissioner warned that children using AI chatbots can be exposed to sexual or violent content.

And “bad actors’’ could maliciously use AI to harm or groom unsuspecting children who think they are chatting to a real person.

“eSafety is concerned about the potential for chatbots to be used as a tool for grooming by starting conversations through social media or gaming platforms to manipulate children and young people,’’ Ms Inman Grant told a federal parliamentary inquiry into the use of AI in education.

“The ability to do this at scale – rather than a perpetrator having to directly participate in a conversation – would constitute an alarming development.

“This demonstrates the need for action in the design of platforms, to anticipate, detect and eliminate the risk upfront, and to build the understanding of children and young people to identify and respond appropriately to grooming behaviours, whether generated by a human or AI.”

Australian eSafety commissioner Julie Inman Grant. Picture: Jonathan Ng

Ms Inman Grant’s concerns echo those of the Australian Human Rights Commission, which also urged the parliamentary inquiry into AI to safeguard its use in schools.

The AHRC has warned that AI chatbots and apps can harvest children’s personal information – name, age, address or school performance – for access by others. It calls for “urgent attention” to the potential commercialisation of student details collected by AI, which relies on large data sets for training and collects personal data to optimise the user experience.

“This creates a range of privacy risks, particularly given the increased prevalence of cyber attacks and data breaches,” the submission states. “In the education system, many of the users will be children who will have no real option but to use the technology if it is adopted by schools.

“Practices such as the sale or transfer of children’s personal data to third parties should be banned, or heavily restricted, to protect children’s rights.’’

The AHRC warns that AI could analyse student search queries for targeted advertising, or that data collected through AI education apps could be sold to third parties.

“It is essential that the data collected through the use of educational technology at schools should not be used for other purposes, and that children are protected from data surveillance,’’ it states. “Any generative AI tools being used in an educational setting should be subject to strict requirements relating to privacy, data security, algorithmic bias and discrimination, and content verification.

“Policies should explicitly prohibit the use of generative AI tools in educational settings to create deceptive or malicious content.’’

The AHRC says children and teachers should be trained to detect fake or manipulated content, rather than schools simply banning AI.

Federal Education Minister Jason Clare has declared that AI will not be permitted to use student data for advertising, as part of a national review to set the first guidelines for the use of AI in schools.

Ms Inman Grant told The Weekend Australian that teachers and parents should be alert to online predators using online games or information platforms.

“We know that predators are targeting children on the very platforms that they love to use, mirroring their age, interests and passions to win their trust,’’ she said.

“Our Mind the Gap research shows that six in 10 children have been in contact with someone they first met online, while only a third of parents are aware this has happened.’’

Ms Inman Grant urged parents to ensure children use electronic devices in open areas of the home, and to set privacy and safety features to the highest level on all digital devices.

Natasha Bita, Education Editor

Natasha Bita is a multi-award winning journalist with a focus on free speech, education, social affairs, aged care, health policy, immigration, industrial relations and consumer law. She has won a Walkley Award, Australia's most prestigious journalism award, and a Queensland Clarion Award for feature writing. Natasha has also been a finalist for the Graham Perkin Australian Journalist of the Year Award and the Sir Keith Murdoch Award for Excellence in Journalism. Her reporting on education issues has won the NSW Professional Teachers' Council Media Award and an Australian Council for Educational Leaders award. Her agenda-setting coverage of aged care abuse won an Older People Speak Out award. Natasha worked in London and Italy for The Australian newspaper and News Corp Australia. She is a member of the Canberra Press Gallery and the Media, Entertainment and Arts Alliance. Contact her by email natasha.bita@news.com.au


Original URL: https://www.theaustralian.com.au/nation/parents-and-teachers-alerted-over-chatbot-child-abuse/news-story/921e780d3e691a997e1315d642d8ec54