‘Schools must report criminal deepfakes to local police’, says eSafety Commissioner Julie Inman Grant
The eSafety Commissioner, Julie Inman Grant, is calling on schools to report the criminal creation and sharing of intimate AI deepfakes to local police as a priority, warning that digitally altered nude or sexually suggestive images and videos, which can be created in seconds, have exploded in schoolyards in the past 18 months.
The watchdog has become increasingly aware of sexual deepfake incidents at schools, including nude images of female students created from formal or yearbook photos and traded for money among boys in the playground, girls creating intimate images of other girls to bully them, and students posting deepfakes on fake social media accounts to damage another student’s reputation.
Websites and apps that allow anyone to undress or “nudify” ordinary images with AI for as little as $2 have “rapidly proliferated”.
Reports to the eSafety Commission of deepfakes and other digitally altered sexual images from people under the age of 18 have more than doubled in the past 18 months, compared with the total number of reports received in the previous seven years.
Ms Inman Grant said the reality is likely worse because of under-reporting. “We suspect what is being reported to us is not the whole picture,” she said.
“I’m calling on schools to report allegations of a criminal nature, including deepfake abuse of underage students, to police and to make sure their communities are aware that eSafety is on standby to remove this material quickly.”
She said these incidents were not always being reported to police. “It is clear from what is already in the public domain, and from what we are hearing directly from the education sector, that this is not always happening.”
The commissioner has written to education ministers, and looped in department secretaries, urging them to ensure schools adhere to state and territory child protection legislation and mandatory reporting obligations.
The creation and distribution of deepfake images can be a criminal offence in some states.
In South Australia, it is illegal to create and distribute AI-assisted deepfakes that are humiliating, degrading, invasive or sexually explicit, even if the offender is under 18. In Victoria, deepfakes fall under the definition of an intimate image for the purpose of image-based sexual offences, and can be illegal to produce and distribute.
Commonwealth child abuse material laws may cover sexual deepfakes of persons under 18.
The eSafety Commission released a guide for schools on Friday on how to manage deepfake incidents, which strongly encourages educators to prioritise the “wellbeing” of targeted students and staff if they are distressed or need support, rather than making the “organisation’s reputation” the focus of the response.
As an “initial response”, the school should determine whether an image or video depicts students or staff in an intimate or sexual context without consent, ensure the affected students are safe, inform the principal, appoint a designated school lead who will share information only on a “need-to-know basis”, and collect evidence to provide to police and eSafety while avoiding “unnecessary exposure to … explicit material”.
The incident should be reported to local police, and then to eSafety if the sexual deepfake has been shared online or if someone is threatening to share it. The commission has issued an Online Safety Advisory to alert parents and schools to a “recent proliferation” of accessible AI “nudify” apps.
Ms Inman Grant said “anecdotally, we have heard from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children are easily able to access and misuse nudify apps in school settings”.
“Alarmingly, we have seen these apps used to humiliate, bully and sexually extort children in the schoolyard and beyond. There have also been reports that some of these images have been traded among school children in exchange for money,” she said.
“We have already been engaging with police, the app makers and the platforms that host these high-risk apps to put them on notice that our mandatory standards come into full effect this week and carry up to a $49.5m fine per breach, and that we will not hesitate to take regulatory action.”
Those mandatory standards, which take effect this week, will “require the tech industry to do more to tackle the highest-harm online content like child sexual abuse material” and “help us to force the purveyors and profiteers of these AI-powered nudifying models to prevent them being misused against children”, Ms Inman Grant said in her press club address this week.