AFP, ASIO reveal ‘diabolic’ risk artificial intelligence is posing to our kids
Extremists, scammers and child abusers alike are exploiting artificial intelligence for harm, making ‘diabolic’ imaginary scenarios a reality.
Artificial intelligence has created a new minefield for intelligence and law enforcement agencies as they battle new frontiers of terrorism threats, child exploitation material and scams.
In a joint address to the National Press Club, the country’s top spy, ASIO Director-General Mike Burgess, and Australian Federal Police Commissioner Reece Kershaw said they were increasingly concerned about how AI was being “weaponised”.
Commissioner Kershaw said he was particularly concerned about the increasing use of AI in creating “diabolic” child abuse material, citing examples of how technology was “nudifying children whose clothed images have been uploaded online for perfectly legitimate reasons”.
Meanwhile, Mr Burgess said ASIO had assessed artificial intelligence would “allow a step change in adversary capability”.
“We’re aware of an offshore extremist already asking a commercially available AI program for advice on building weapons and attack planning,” Mr Burgess said.
“And when the program refused to provide the requested information, the extremist tried to bypass its ethical handbrakes.
“As I mentioned at the outset, the internet is the world’s most potent incubator of extremism. AI is likely to make radicalisation easier and faster.”
He said the agency also anticipated artificial intelligence “will increase the volume of espionage”.
Commissioner Kershaw said that from a policing perspective, technological advances posed real problems in keeping Australians, especially children, safe.
“If it used to take a village to raise a child, constant advances in technology now mean it takes a country, global law enforcement and the private sector to help keep them safe,” he said.
Talking specifically about AI, he showed audience members artificially generated pictures of children and asked them to consider a “diabolical scenario”.
“The AFP identifies online child sexual abuse material and immediately looks at whether it involves a new victim, whether we believe their life is at risk, and whether it is a priority case,” he posited.
“But after weeks, months or maybe years, investigators determine that there is no child to save, because the perpetrator used AI to create the sexual abuse image.
“It is an offence to create, possess or share this material, and it is a serious crime.
“But the reality for investigators is that they could have been using capability, resources and time to find real victims. Please reflect on that.”
He said one thing parents could do to keep their children safe was to lock down their settings on social media accounts to make it harder for others to access images and use AI to create abuse material.
He said the AFP was also talking to social media companies about how to help identify the “tsunami of AI-generated abuse material we know is coming”, which could include safeguards such as enforcing digital watermarks on AI tools.
“There is no silver bullet and offenders are always looking at how they can beat technological countermeasures,” he said.
When asked how concerned he was by the use of AI in revenge porn or other instances targeting adults, Commissioner Kershaw said it depended on how “innovative and ingenious” criminals were.
“I have no doubt that they will weaponise AI to their advantage. So we will see more victims, more than likely, and perhaps more silent victims, which we won’t know about,” he said.
“And that’s why we’re really encouraging the community to contact police … to report it.”