NewsBite

How AI is opening new pathways for hackers to exploit Aussies, while experts are powerless to stop

The rise of AI is opening new pathways for hackers to exploit Aussies – and its rapid evolution has left a shortage of experts able to stop it.

A cybersecurity expert has revealed how AI is opening new pathways for hackers to exploit Aussies through new scams and cyberattacks, and how there is a shortage of professionals who can combat the threat.

Artificial intelligence is opening a “collection” of new pathways for hackers to exploit vulnerable Aussies, from sophisticated new scams to the hacking of smart devices – and the technology’s rapid evolution is leaving a shortage of skilled cybersecurity professionals to stop it.

The warning from Dr Chao Chen, a leading Australian cybersecurity figure, follows a grim trend of sophisticated scams which cost Australians about $2.74bn over the last financial year alone.

In 2020, Aussies lost just $851m to scams reported to Scamwatch, ReportCyber, IDCARE, the Australian Financial Crimes Exchange (AFCX) and the Australian Securities and Investments Commission (ASIC), according to figures from the competition watchdog.

But that figure rose over the Covid-19 pandemic – swelling to $1.8bn in 2021 and then to a record $3.1bn in 2022.

Dr Chen, the deputy director of the Enterprise AI and Data Analytics Hub at RMIT’s College of Business and Law, says AI has made major technologies vastly more personalised and responsive than before, streamlining daily activities.

But he says it is a double-edged sword, warning that AI-powered tools can automate and scale up cyberattacks and scams.

The rise of AI has given hackers more pathways to exploit vulnerable Aussies, enabling more sophisticated phishing, ransomware and deepfake scams, and allowing these fraudulent schemes – and even cyberattacks – to be scaled up. Picture: Supplied

“While such incidents may have been rare initially, the increasing availability of AI tools and the sophistication of these attacks suggest a rising threat,” Dr Chen said.

“We have already noted a marked increase in AI-enhanced phishing scams, ransomware attacks, and deepfake-related incidents in the past few years.”

Just this month, NSW Police issued an alert over a disturbing scam in which AI was used to create fake videos or messages resembling a person’s loved ones or celebrities to dupe people into fake investments.

Hunter Valley resident Gary Meachen was one such victim – losing his $400,000 nest egg to a Facebook investment scam which appeared to have been spruiked by billionaire Elon Musk, Prime Minister Anthony Albanese and former prime minister Julia Gillard, among others.

Hunter Valley resident Gary Meachen lost his life savings to an AI-generated deepfake scam which claimed to have been endorsed by the likes of Elon Musk. Picture: Supplied / A Current Affair
Mr Meachen said the ad showed reputable celebrities claiming to back the investment, but in reality they were deepfaked clips and images of the figures. Picture: Supplied / A Current Affair

Other scams have used the likenesses of local politicians – Sunshine Coast Mayor Rosanna Natoli revealed a fake profile bearing her name and image had attempted to contact people over Messenger for their banking details.

Others have used AI to generate the likeness of renowned science communicator Dr Karl Kruszelnicki to promote bogus health products, or pictures of TV star David Koch in clickbait-style images.

A more worrying trend has emerged overseas, where British engineering giant Arup was revealed as the company which fell for a sophisticated deepfake scam that resulted in a Hong Kong employee transferring $US25m to fraudsters.

Dr Chen said the AI tools used to create the fake images, videos and voices behind these scams allowed criminals to “automate and scale up” cyberattacks and even make them harder for authorities to detect.

He pointed to research which found AI could scan for weaknesses in software and networks, allowing attackers to identify potential points of entry with greater speed and accuracy than manual methods.

Dr Chen said another risk was AI being exploited by hackers who could introduce “subtle, often imperceptible modifications” to input data.

He said this would result in AI models making incorrect predictions or classifications.
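In machine learning research, this kind of attack is often illustrated with the fast gradient sign method (FGSM). The Python sketch below is a hypothetical toy example of that technique – using the PyTorch library with a stand-in model, not any system mentioned in this story – showing how an attacker could compute a near-invisible pixel-level change that pushes a classifier toward the wrong answer.

```python
# Minimal FGSM sketch: a toy illustration of the "subtle, often
# imperceptible modifications" described above. The model and data
# are placeholders, not any real-world system.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel slightly in the direction that increases the
    model's loss; the change is tiny, but the prediction can flip."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # epsilon bounds how far any pixel moves: small enough to be
    # invisible to a person, often enough to fool the classifier
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Illustrative usage with a toy classifier and a random "image"
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)  # stand-in for a real input
label = torch.tensor([3])         # the class the attacker wants to escape
adv_image = fgsm_perturb(model, image, label)
print((adv_image - image).abs().max())  # perturbation never exceeds epsilon
```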

One such example occurred almost a decade ago, when Tay, an AI chatbot designed by Microsoft, was manipulated by users into tweeting racist and sexist remarks and references to Adolf Hitler.

Dr Chen said hackers could also make subtle, “often imperceptible” modifications to input data used by AI, which could cause AI models to make incorrect decisions in critical applications such as autonomous vehicles, medical diagnosis and financial fraud detection. Picture: Supplied

“These adversarial examples can mislead AI systems in critical applications such as autonomous vehicles, medical diagnosis, and financial fraud detection,” Dr Chen said. “Moreover, by analysing the outputs of an AI model, attackers can infer sensitive information about the training data.

“For instance, inverting a facial recognition model could allow hackers to reconstruct images of individuals used in the training dataset.”
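A basic version of such an inversion can be sketched as gradient ascent on the model’s input: starting from a blank image and repeatedly adjusting it until the model reports high confidence for a chosen identity. The Python example below is a minimal illustration under that assumption, with a hypothetical toy classifier standing in for a real face-recognition model.

```python
# Minimal model-inversion sketch: optimise an input until the model
# strongly predicts a chosen class. Against a face recogniser, this
# style of attack can leak likenesses from the training set.
import torch
import torch.nn as nn

def invert_class(model, target_class, steps=200, lr=0.1):
    """Gradient-ascend an input image so the model scores
    target_class highly, approximating what it 'remembers'."""
    x = torch.zeros(1, 1, 28, 28, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]  # maximise the target logit
        loss.backward()
        optimizer.step()
        x.data.clamp_(0, 1)  # keep the reconstruction a valid image
    return x.detach()

# Toy stand-in "recogniser"; a real attack would target a trained model
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
reconstruction = invert_class(model, target_class=3)
```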

A recent study from the Swiss-based Idiap Research Institute tested such a scenario, with researchers reconstructing face images from stored templates using a “pre-trained geometry-aware face generation network” and a mapping trained on real and synthetic faces.

Dr Chen said cybersecurity experts were already seeing a marked increase in AI-enhanced phishing scams, ransomware attacks and deepfake-related incidents across Australia in recent years.

The Australian Competition and Consumer Commission’s (ACCC) latest scam report, covering 2023, found Australians lost $1.3bn to bogus investment scams in that year alone.

And despite people being more aware of scams and the potential for AI to turbocharge them, Australia was facing a “significant” shortage of skilled professionals knowledgeable in AI technologies, Dr Chen said.

In another twist, Dr Chen revealed there was a shortage of skilled cybersecurity professionals versed in AI to combat the looming threat. Picture: Chris Delmas / AFP

“The rapid evolution of AI-driven threats makes it difficult for traditional security measures to keep up,” he said.

“Additionally, organisations face regulatory and compliance challenges as they strive to adhere to evolving standards, which can be particularly resource-intensive for smaller businesses.

“Cybersecurity firms must focus on technological advancements, such as developing and deploying AI tools to identify and mitigate AI-generated scams.

“Establishing rapid response teams to handle AI-generated scams and conducting post-incident analyses will further strengthen the industry’s ability to counter these threats.”

The fear of AI being exploited is something shared by the federal government.

“AI presents risks such as malicious actors using AI tools to expand their activities, such as using generative AI to produce low effort, high quality material for phishing attacks,” a spokesman from the Australian Signals Directorate (ASD) said.


“For individuals using AI technologies, particularly generative AI, ASD recommends applying the same basic security principles as they would when using any online tool, including strong passphrases, updating device software promptly and introducing multi-factor authentication of identity to log into online accounts.”

The ASD routinely updates the Information Security Manual (ISM), which provides advice to government and businesses on how best to protect their systems and data.

The ISM includes recommendations for AI tools, services and implementation in the Guidelines for Personnel Security; Procurement and Outsourcing; Cyber Security Incidents; and Software Development.

Originally published as How AI is opening new pathways for hackers to exploit Aussies, while experts are powerless to stop

Original URL: https://www.themercury.com.au/technology/online/how-ai-is-opening-new-pathways-for-hackers-to-exploit-aussies-while-experts-are-powerless-to-stop/news-story/9382e9b83ed0cbbc8249f6f09c85d9dc