
Exclusive

ChatGPT for criminals to turbocharge scams, say Australian Federal Police

Malicious artificial intelligence could lead to a cybercrime explosion, ushering in a new generation of ultra-sophisticated scams, federal police warn.

Australians are at risk from malicious AI that could lead to a new generation of scams, federal police warn. Picture: AAP


An Australian Federal Police submission to a federal cybercrime inquiry flags AI models such as FraudGPT and WormGPT as a growing threat.

The models are similar to ChatGPT, but are devoid of restrictions on answering questions about illegal activity.

They provide a suite of tools that can craft spear-phishing emails “with perfect grammar and spelling”, aimed at stealing sensitive information such as login details, the AFP says.

Australian Federal Police Commissioner Reece Kershaw in his office in Canberra. Picture: Sean Davey

The tools can also assist in voice phishing for “hi Mum, hi Dad” scams, business email compromise attacks, generating malware and testing for security vulnerabilities.

“The development of malicious AI models by threat actors is in its early stages, but already proving effective and lowers the entry threshold for burgeoning cyber criminals who may lack the technical proficiencies or resources to establish their own cyber criminal tradecraft,” the submission states.

The Australian Institute of Criminology also warns: “AI is already being leveraged by criminal actors to upscale and enhance criminal activities, exploit human-centric vulnerabilities and lower the barriers and costs to engaging in criminal activities.”

With Australians reeling from major hacks on corporations including Medibank and Optus, the federal parliamentary joint committee on law enforcement is conducting an inquiry into the capability of agencies to respond to cybercrime.

When WormGPT’s existence was revealed in July, it was described by cyber security firm SlashNext as being “similar to ChatGPT but with no ethical boundaries or limitations”.

FraudGPT has reportedly been advertised on the dark web as an “unrestricted alternative for ChatGPT”.

The AFP says current trends such as Ransomware-as-a-Service (RaaS) and Malware-as-a-Service (MaaS) have allowed more people to launch cyber attacks and scams. “The frequency and severity of cybercrime incidents are expected to increase as a result, placing new demands on the AFP as a law enforcement agency,” the submission states.

Other recent hacks were carried out on Victoria’s court system and the St Vincent’s Health network.

Malicious AI was also used to create increasingly realistic deepfakes, lifelike child abuse material and believable disinformation content, the AFP says.

“AI is an emerging technology in child exploitation matters and as it improves, the material is becoming more lifelike. This type of material is known as deepfakes, which involve manipulating images, audio and video using AI.”

The AIC’s submission to the inquiry states almost one in every two Australians surveyed had been a victim of at least one type of cybercrime in the previous 12 months.

“Cybercrime targeting individual computer users is most frequently a high volume, low yield crime,” the AIC states.

“The high rate of victimisation means that, even with the relatively small median losses per victim, the overall cost to Australian individuals is likely to be enormous.”

The impact of losses was “potentially catastrophic and can have long-term effects on victims”.

Cyber criminals were quick to adopt emerging technologies for criminal purposes, the AIC states.

“Artificial intelligence has the potential to facilitate better-targeted, more frequent and widespread criminal attacks, and is already being used for password guessing, CAPTCHA-breaking and voice cloning,” the submission states.

David Murray, National Crime Correspondent

David Murray is The Australian's National Crime Correspondent. He was previously Crime Editor at The Courier-Mail and prior to that was News Corp's London-based Europe Correspondent. He is behind investigative podcasts The Lighthouse and Searching for Rachel Antonio and is the author of The Murder of Allison Baden-Clay.


Original URL: https://www.theaustralian.com.au/nation/politics/chatgpt-for-criminals-to-turbo-charge-scams-say-australian-federal-police/news-story/6dfcbdc500381f3569762a68adf12dfe