Students using generative AI to bully with ‘sexually explicit’ content, deepfakes
The first instance of AI-generated sexually explicit content produced by students to bully others has already been reported, but experts fear it’s just the tip of the iceberg.
Generative artificial intelligence is being used to amplify cyber-bullying among schoolchildren, including through sexualised images, as students start using the technology to create harmful deepfake videos and audio, the eSafety Commissioner has warned.
The first instance of AI-generated sexually explicit content produced by students to bully other students was reported to the eSafety Commissioner in August, but Commissioner Julie Inman Grant said she feared it was just the tip of the iceberg.
Teachers, particularly female staff members, are also expected to become targets of abuse using the emerging technology.
She said the generative nature of the emerging technology meant it no longer took huge amounts of computing power to create convincing deepfakes, images, video or audio digitally manipulated to make a person appear to say or do things they never did, which can then be used as pornography, child exploitation or cyber-bullying material.
The Education Ministers Meeting takes place next week, where ministers are expected to be presented with the consultation results on the Draft AI Framework for Schools, developed by the National AI Taskforce.
Ms Inman Grant said the first report of AI-generated cyber-bullying came on top of reports of AI-generated child sexual exploitation material and a small but growing number of “distressing and increasingly realistic” reports of deepfake pornography.
“We suspect the harms being unleashed are much more widespread and our reports are just the tip of the iceberg,” she said.
“It’s becoming harder and harder to tell the difference between what’s real and what’s fake online. And it’s much easier to inflict great harm.”
The risk is not limited to bullying among students. In a submission to a parliamentary inquiry into generative AI in the education system, Ms Inman Grant warned that teachers, who are already exposed to online abuse, are also likely to become targets.
“Female staff are at particular risk of sexualised abuse,” Ms Inman Grant said.
Parents and carers are urged to be mindful of how much they share about their children online, to encourage the use of devices in open areas of the home, and to set rules as a family about which devices and apps can be used, when and for how long.
An Education Queensland spokeswoman said the department had not been made aware of any specific reports of cyber-bullying involving AI-generated material.
“Queensland state schools are committed to ensuring that students are aware of the positive opportunities that technology and the internet can provide for their learning, but also the risks and negative outcomes that can occur from inappropriate behaviours online,” she said.
“Bullying using generative AI is equally as serious as other forms of bullying and will be dealt with as cyber-bullying, using existing processes and services.”