NewsBite

Exclusive

‘Stop with the platitudes’: Big tech warned to step up on safety amid rampant AI child exploitation images

Meta is failing to protect children with its “completely inadequate” response to platforms like Instagram being flooded with disturbing AI-generated images of young boys that experts say constitute child sex-abuse material.

Australian child safety advocates have called out the tech giant for failing to stamp out a concerning trend in which fake images of young children, posed suggestively, wearing minimal “fetish” clothing and appearing to be covered in body oil, are hosted on social media accounts that adult men then like and post inappropriate comments on.

After identifying one Instagram account featuring dozens of these AI images, Collective Shout, a grassroots organisation that campaigns against the sexualisation of young children and women, last month reported the issue to the eSafety Commission.

Before an investigation could be completed and any request made to Meta to remove the content, the account was made private and the posts no longer appeared.

But Collective Shout campaign manager Caitlin Roper said there remained many images connected by certain hashtags on Instagram that amounted to child exploitation, and accused Meta of “failing” to moderate its platforms to remove harmful material.

“Whatever is in place is completely inadequate,” she said.

Meta and other big tech platforms are on notice to step up content moderation. Picture: Fabrice Coffrini/ AFP

Ms Roper said AI was increasingly blurring the lines between real and fake, which was further facilitating predatory behaviour.

She said even though the images may not show a child naked or being abused, they were still exploitative and should be considered child sex-abuse material.

“We come across images that are somewhat sexualised or inappropriate, like a child in a bathing suit, or an up-close image of a child’s crotch area, that attract a lot of comments from predatory adult men,” she said.

A communications team for Meta’s Instagram did not respond to a request for comment on what the company was doing to remove harmful images from its platform.

Australian eSafety commissioner Julie Inman Grant says big tech companies must put safety at the heart of their product and software designs. Picture: Jonathan Ng

eSafety Commissioner Julie Inman Grant said it was time for big tech to “stop with the platitudes” and take “decisive, co-ordinated action” to stem the tide of harmful material on their platforms.

“These companies rank among the world’s wealthiest and most powerful, with access to the most advanced technologies and incredible technical talent,” she said.

“They’re more than capable of undertaking positive change to protect children if they make tackling child exploitation on their services a priority.”

Ms Inman Grant said that, as of December last year, eSafety had legally enforceable obligations in place to ensure the industry was taking “meaningful action” to tackle child sexual abuse material, but conceded “regulation can only take us so far”.

“Our call to industry is to heed the lessons of past social media failings and to embed and adopt Safety by Design measures,” she said.

“Building safety in, rather than bolting it on after harm has occurred, is the best strategy to meaningfully protect users.”

Communications Minister Michelle Rowland said the entire digital industry “must do better” when it came to removing harmful content.

“The government has brought forward a review of our online safety laws to ensure they are strong enough to meet new and emerging harms, including when it comes to AI-generated content,” she said.

“The review will report to Government later this year.”

Original URL: https://www.goldcoastbulletin.com.au/news/national/stop-with-the-platitudes-big-tech-warned-to-step-up-on-safety-amid-rampant-ai-child-exploitation-images/news-story/a10ab2a7e9add3a78491fe8f124c0035