Let police use artificial intelligence amid ‘tsunami’ of online child sex abuse
Police must be able to use artificial intelligence to help combat soaring rates of online child sex abuse, senior officers and child protection experts say.
Authorities will continue fighting a “tsunami” of online child abuse with their hands tied behind their backs if they can’t use tools such as artificial intelligence, they warn, arguing the voices of privacy advocates are stifling those of victims and survivors.
Jon Rouse, a former Queensland detective inspector who worked in child protection for nearly 30 years and spearheaded Australia’s first operation targeting internet child sex offenders, said AI technologies such as Clearview AI – a platform banned in Australia – should be used. “If we are going to do the worst job, we need access to the best tools, simple as that,” he said.
“We need access to the best tools because we are fighting with our hands behind our back. (Offenders) have end-to-end encryption, they have anonymised platforms, they have obfuscation. We are fighting with a very small force for the rights of children globally.”
Professor Rouse said that before he retired, he worked to get victim-identification professionals from around the world to the US – where Clearview can be used – to identify cold-case victims of abuse. Dubbed Operation Renewed Hope, the mission successfully identified 311 probable victims of child sex abuse this year.
The operation “worked ethically. It worked appropriately with oversight. Right now the privacy we’re protecting is the child sex offender,” he said.
AFP Assistant Commissioner Hilda Sirec said the use of AI would fast-track the work of law enforcement officers and minimise their exposure to child abuse images. “One hundred per cent law enforcement has to adopt technologies to help identify victims (and) search for offenders to make sure they stop victimising (children),” she said.
“There’s a lot of things that the technology out there can do. (It can do) the same amount of identification and analytics that thousands of humans would take thousands of lifetimes to achieve.
“We’ve just got to get the processes done in the right way and adopt these technologies.”
Both said the top priority was the protection of children.
AFP Deputy Commissioner Lesa Gale added: “Given that the volume of child exploitation material coming into our country is exploding – I would almost go so far as to say it is like a tsunami – anything we can do to identify those victims and to reduce exposure for our officers is important.”
It comes as police, officials from the Department of Foreign Affairs and the Attorney-General's Department, and survivors including Grace Tame gathered for the launch of the AiLECS Lab, a partnership between the AFP and Monash University.
AiLECS aims to advance ethical and transparent datasets for law enforcement agencies. The lab has so far collected about 800 images with consent to build the world’s first ethical image bank to combat child exploitation.
By comparison, Clearview AI has scraped 30 billion facial images from public websites including social media.
The Australian Centre to Counter Child Exploitation received more than 36,000 reports of abuse in 2021-22. In the same period, the AFP charged 221 people with 1746 related offences.
In 2021, Australia’s privacy watchdog effectively made the use of Clearview illegal after an investigation found it had collected facial images without consent; the platform had been trialled by some Australian police forces between October 2019 and March 2020.
If you need help, visit accce.gov.au.