AI warning: Fears over child abuse material generated from social media photos

Experts fear AI could be used to generate child sexual abuse material based on photos scraped from social media. Find out how to protect your kids.

Exclusive: AI-generated child abuse material is being posted on social media platforms, sparking serious concerns it will hinder police from tracking down real victims.

The eSafety Commission, Australia’s online safety watchdog, received its first reports of AI-generated child sexual abuse material last month.

eSafety Commissioner Julie Inman Grant said investigators could tell the AI images were not real, but soon it will be too hard for the human eye to distinguish.

“What makes generative AI so powerful, and possibly treacherous, is the generative component: the fact that new content can be created using a range of source data and materials, from cartoons and anime to photos of children,” Ms Inman Grant said.

Australian eSafety commissioner Julie Inman Grant. Picture: Jonathan Ng

She said predators could also generate sexual images using photos of children harvested from social media, or feed existing abusive material into AI to create new poses and sexual acts, meaning victims could be re-victimised again and again.

Within minutes, we were also able to find sexualised AI images of young girls on social media platforms.

Ms Inman Grant said all child sexual abuse material – whether real, fake or AI-generated – is harmful because it normalises the sexualisation of children.

Online pedophile hunter Lyn Swanson Kennedy from campaign group Collective Shout said she has reported AI-generated child sexual abuse images on social media platforms, such as Twitter.

“The images look very realistic, and attract suggestive and predatory comments from men,” Ms Swanson Kennedy said.

Lyn Swanson Kennedy, of Perth, who tracks down pedophiles online for campaign group Collective Shout.

Former police officer Jon Rouse, who has helped infiltrate and smash international pedophile networks, said it is time Australia adopted a powerful facial recognition tool that can quickly identify “real” child abuse victims online – but which is banned here due to privacy concerns.

The Australian Federal Police secretly began using the controversial technology Clearview AI in 2019, uploading images of child abuse victims, but an investigation found it failed to comply with its own privacy obligations. It no longer uses it.

Despite privacy concerns, Mr Rouse said police officers were “screaming out for this”.

“Imagine if we never used DNA technology to solve cases,” Mr Rouse said. “This technology is a similar kind of revolution in law enforcement.”

Jon Rouse, former Detective Inspector, children’s champion and 2019 Queensland Australian of the Year. Picture: Gary Ramage

Clearview AI – created by an Australian living overseas – has scraped billions of people’s photos from social media accounts.

The company sells searchable access to this database.

The software has been used by Ukraine to identify Russian soldiers who have been killed or captured, check people at checkpoints and search for missing people.

However, it is banned in many countries due to privacy concerns and fears the data could end up in the wrong hands.

Mr Rouse said US law enforcement successfully uses it to identify child victims.

He said while offenders are embracing AI, Australian child protection detectives are effectively working with one arm tied behind their backs.

“The debate over protecting privacy and whether we should be allowed to scrape data is irrelevant to me,” Mr Rouse said.

“Anyone who has seen and heard the things I have in child protection doesn’t care about that.”

Mr Rouse, who retired from the police force in May, now works in an advisory role at Monash University looking at how best to harness responsible AI systems to catch predators.

An AFP spokeswoman said it is illegal to produce any sexualised image of a person under 18, whether it depicts a real person or not.

To report harmful content go to eSafety.gov.au/report.

WHAT PARENTS NEED TO KNOW ABOUT AI CHILD ABUSE IMAGES

What is AI-generated child abuse material?

Artificial Intelligence (AI) technology can create images of people from scratch or transform existing images into new ones. At the moment it is still just about possible to distinguish AI-generated faces from real ones, but soon it will be too hard for the human eye to detect. AI-generated sexualised images of minors are already circulating on social media platforms.

What material does AI use to create these images?

AI generators can use cartoons, anime and real photos that are already on the internet to create new images. In some cases existing child abuse material could be fed into AI to depict real victims in new poses and sexual acts.

Why should parents be worried about AI images?

Predators could use photos from your social media pages or the internet to create sexualised images with your child’s face.

How can parents protect their children?

Seriously consider whether it is worth putting up a photo of your child on the internet.

If you do, adjust your privacy settings and make sure that you only share content with people you know and trust. Avoid sharing any photos or videos with personal or private information – particularly school uniforms that can identify location. Always check with other parents before posting, sharing or tagging images that include their children. 

Originally published as AI warning: Fears over child abuse material generated from social media photos


Original URL: https://www.adelaidenow.com.au/truecrimeaustralia/ai-warning-fears-over-child-abuse-material-generated-from-social-media-photos/news-story/85b9e4eae0aa1e3888f6791ef8562010