Up to a quarter of AI image generation resources are porn, amid school deepfakes warning
A leading online AI image generation forum has revealed that up to a quarter of its content is ‘mature’ or sexually explicit in nature, amid heightened warnings of AI deepfakes in Australian schools.
A leading online AI image generation resource forum says up to a quarter of its content is porn or sexually explicit in nature, as the nation’s eSafety Commissioner warns of the proliferation and development of deepfake AI technology.
Since the public release of ChatGPT in November last year, there has been a flurry of generative AI tools to produce images and videos. The most common tools take plain English prompts like “black cat, jungle, cartoon style” and return an image. Perhaps unsurprisingly, they are increasingly being used to make deepfake pornography.
On CivitAI, a leading AI image resource repository, users have uploaded AI-trained models of individual people that others can then download and use to make images and videos. These can be combined with other AI-trained models depicting different poses, attire, settings and so on.
At the time of publication, there were thousands of models of real people – overwhelmingly women – with dozens of Australians, ranging from international stars like Margot Robbie to Instagram personalities with fewer than 7000 followers.
While deepfakes are not new, creating convincing images in software like Photoshop was a high-skill, time-consuming process.
Now people with no programming or graphic design knowledge can generate near-photorealistic images of specific people in specific situations at the click of a button.
A CivitAI spokesperson said “less than 25 per cent of resources and images uploaded to CivitAI are mature in nature, and for the most part our users do an excellent job of policing themselves.”
Alanah Pearce, an Australian video game writer based in the US, has had multiple models of her likeness uploaded to CivitAI which appear to have been downloaded hundreds of times. She specifically drew attention to the potential of the technology to be used in schoolyard bullying.
“People have been making deepfakes using my face or voice for around 10 years at this point,” she told The Australian.
“Where pornographic deepfakes are concerned, they’re increasing in quality and as they improve, there’s little people like me can do other than say ‘It’s not me and it never will be’. You can’t really disprove them.
“As the tech proliferates … I worry about when it starts happening to school kids because (it) has become accessible enough for a schoolyard bully to use it.”
In a submission to the House of Representatives inquiry into generative AI in education, the eSafety Commissioner said “there is a risk generative AI technology may be used to create more sophisticated, realistic images … as the technology becomes more widespread.”