
The new hacking ‘nuclear bomb’: How AI is stealing your money

World-class cyber crime expert lifts lid on startling new technology being easily adopted by criminals. Here’s what you need to watch out for.

CyberCX executive director of cyber intelligence Katherine Mansted.

Cyber criminals are tapping into the power of artificial intelligence, with experts warning businesses to brace for more sophisticated and widespread attacks that could have far more catastrophic consequences than the recent Qantas breach.

Experts say it’s likely AI was used to carry out the vishing – voice phishing – attack on the Qantas call centre in Manila, where an attacker impersonated an employee to gain access to the personal information of millions of the airline’s customers.

It’s part of a worrying trend of scammers using AI to trick their targets – from deepfake voices, images and video that deceive victims, to faster password cracking, to personalised and sophisticated phishing emails far more convincing than the old Nigerian prince scams.

Katherine Mansted, executive director of cyber intelligence at cyber security firm CyberCX, says AI tools enable cyber criminals to improve the realism of social engineering attacks designed to exploit human vulnerabilities.

The recent cyber attack on a Qantas call centre affected close to 6 million of the airline’s customers. Picture: David Gray / AFP

“Social engineering is as old as the hills, but AI lets criminals get a lot better at their social engineering game,” she says.

“Use of deepfakes, for example, voice-based phishing using AI-generated voice to trick people into giving over access, giving over data.

“And there’s been some high profile examples where CFOs, or businesses, have been tricked out of huge amounts of money, multimillion-dollar amounts of money, because they thought they were engaging with a real person, but they were engaging with a deepfake impersonating that person.”

In February last year global engineering firm Arup fell victim to one of the biggest AI-driven cyber attacks, when a Hong Kong-based employee was duped by deepfake voices and images into making a £20m ($41m) payment to criminals.

The finance worker attended a video call with people he believed to be senior management at the company, but who were in fact deepfake creations.

Closer to home, a recent report on global phishing trends from cybersecurity company Zscaler found Australia was the eighth most targeted country, with more than 30 million phishing attempts recorded during 2024.

CyberCX executive director of cyber intelligence Katherine Mansted says AI is helping cyber criminals to “get a lot better at their social engineering game”. Picture: NCA NewsWire / Gary Ramage

It warns that the accessibility of AI tools has “lowered the barrier to entry” for creating convincing deepfakes, making them a growing threat in ‘spear phishing’ campaigns designed to trick specific organisations or individuals into disclosing sensitive information.

But Ms Mansted says the increasing sophistication of cyber attacks is just one part of the AI problem, describing the speed and scale at which cybercriminals can now identify and exploit vulnerable targets as the “more nefarious way we’re seeing AI affect cybercrime”.

“You’d be hard pressed to find a good guy – an individual, small business or a big business – that’s not AI curious or actively investing in AI to increase their efficiency. And it’s exactly the same in cybercrime ecosystems,” she says.

“AI gives you the ability to scale really quickly, often really cheaply. And you don’t need to be the best. You just need to have volume.

“We’re seeing some of these cyber criminals probe across the economy for vulnerabilities at scale. So previously, where you needed a human in the loop to do that, now you can detect weaknesses in networks at machine speed.

“What does that mean? It means that it’s more of the same, but it’s just at a higher rate. And more of a volume, which means that defenders need to be more on their toes than ever before.”

Peter Soulsby, head of cyber security at IT company Brennan, says it’s only the start of a worrying trend that could get worse once the human behind the criminal activity loses control.

“It used to take criminals a couple of days, if not weeks, to craft email campaigns and get them out there. The use of AI is making that time to weaponise a phishing campaign go from days and weeks to hours, if not minutes,” he says.

The Qantas breach was the latest in a long line of cyber attacks on large Australian companies, including Optus and Medibank. Picture: Lisa Maree Williams/Getty Images

“Criminals are certainly using AI at scale ... and we aren’t too far away from seeing AI as a weapon being unleashed as an attack.

“The concept of an AI machine attacking an organisation or infrastructure or applications, it’s coming, it’s inevitable. It’s probably still a year or two away, but when it does happen, there’s going to be unknown consequences that are very difficult to manage or predict.

“In theory, you should have someone at the helm of whatever you’ve built with AI, but in practice ... the criminals will lose control of their AI pretty quickly, is what I suspect will happen, and at that point the impact becomes unquantifiable.

“Just like the first use of a nuclear bomb – people didn’t quite know what a nuclear bomb was going to do, and AI is going to have the impact, scale and significance that is difficult to predict.”

The Australian Signals Directorate, the government agency overseeing the nation’s cybersecurity, warns the increasing prevalence of AI means Australia “must be responsive to an ever-changing cyber threat landscape”.

It says the usual security principles still apply: strong passwords, regular and prompt device updates, and multi-factor authentication, along with training people to be suspicious of unusual activity.
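Multi-factor authentication often rests on time-based one-time passwords, the six-digit codes from authenticator apps that change every 30 seconds. For technically minded readers, here is a minimal sketch of how such a code is computed under RFC 6238, using only Python’s standard library; the secret shown is a placeholder, and real systems should rely on a vetted authentication library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor: how many 30-second windows have elapsed since 1970.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation picks four bytes of the HMAC output to form the code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration only; never hard-code real secrets.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code rolls over every 30 seconds, a criminal who phishes a password alone still cannot log in, which is why the ASD keeps multi-factor authentication on its list of basics.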

Ms Mansted says it’s that human element that will be most important in the fight against AI-armed cyber criminals.

“For small to medium businesses ... it might mean they need to change how they do their cybersecurity training, particularly when it comes to that deepfake piece. You really need to know that we now live in a world where seeing isn’t believing,” she says.

“As a nation we’ve lived through your Optus and your Medibank moments that have put cyber on the agenda for all of us – individuals up to the big end of town.

“Probably where there’s the least preparedness is the new threats that come from rolling out AI solutions too quickly. AI can be really good for customer outcomes, but AI also creates a new and complex attack surface that creates new threats.

“The key message is that as you roll out AI, you must do it with security at the beginning. We made this mistake with the internet, where we focused on useability and efficiency over security. We can’t make that mistake again with the rollout of AI because the stakes now, and the consequences, are so much higher than they ever have been.”

Hackers followed through with a threat to leak highly personal health data of Medibank customers, posting it on the dark web. Picture: Gaye Gerard

Recent examples of AI-driven cyber attacks

Arup deepfake case

In February 2024, fraudsters used an AI-generated deepfake to steal £20m ($41m) from global engineering firm Arup. The attackers used AI-generated video and audio to impersonate the firm’s CFO and senior executives during a conference call, tricking a finance employee into transferring the funds across 15 transactions.

Qantas vishing attack

Experts say it’s likely AI was used in the vishing – voice phishing – attack on the Qantas call centre in Manila, in which an attacker impersonated an employee to access the personal information of millions of the airline’s customers. The breach compromised names, email addresses, phone numbers, dates of birth and frequent flyer numbers, and was the latest in a series of cyber attacks on large Australian companies, after the attacks on Optus and Medibank in 2022 and, more recently, on some of Australia’s largest super funds.

Deepfake crypto scams

YouTube has become a target for hackers, with crypto-related channels being hijacked by cyber criminals. Since at least mid-2024, a more sophisticated version of the scam has emerged, aided by deepfakes. The scammers use deepfakes of prominent figures – often Elon Musk – to make it appear they are endorsing a scam website, promising, for example, to double your investment. In one case an Australian man lost $80,000 in cryptocurrency after seeing a deepfake Elon Musk video interview on social media, clicking the link and registering his details through an online form, only to be locked out of his account.

Midnight Blizzard phishing attacks

In 2023, Russian hackers posing as tech support staff in Microsoft Teams chats launched AI-driven phishing attacks to steal login credentials from dozens of global organisations. Known as Midnight Blizzard, the attackers are believed to be linked to Russian foreign intelligence. In late 2024 the group, also known as APT29 and Cozy Bear, used AI to craft spear phishing emails sent to thousands of targets in more than 100 organisations. The emails contained a signed Remote Desktop Protocol (RDP) configuration file that, once opened, connected the victim’s machine to a criminal-controlled server. In some cases, the cyber criminals impersonated Microsoft employees.
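One practical defence suggested by this campaign is to inspect emailed .rdp attachments before anyone opens them. The short Python sketch below checks the server an RDP configuration file points at against an allowlist of known company hosts. The hostnames are invented for the example, though the “full address” field is the standard setting an .rdp file uses to name its target server.

```python
# Minimal, illustrative check of an emailed .rdp attachment.
# The allowlisted hostnames below are placeholders, not real infrastructure.
ALLOWED_HOSTS = {"rdp.corp.example.com", "gateway.corp.example.com"}

def rdp_target(rdp_text: str) -> str | None:
    """Extract the server a .rdp configuration file would connect to."""
    for line in rdp_text.splitlines():
        if line.lower().startswith("full address:s:"):
            return line.split(":s:", 1)[1].strip()
    return None

# A simplified stand-in for a malicious attachment of the kind described above.
attachment = "screen mode id:i:2\nfull address:s:attacker-controlled.example.net\n"

host = rdp_target(attachment)
if host and host not in ALLOWED_HOSTS:
    print(f"Quarantine: RDP file connects to unapproved server {host}")
```

In practice this job belongs to a mail gateway rather than a script, but the principle is the same: treat an unexpected remote-desktop file as hostile until proven otherwise.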

UAE voice-cloning scam

In 2021 a United Arab Emirates bank was defrauded of $US35m ($53m) after a branch manager took a call from someone he thought was a director at head office, requesting the prompt transfer of funds to finalise an acquisition. The request was backed up by emails naming a lawyer who had been authorised to coordinate the deal. The manager complied, only to find out later, following an Emirati investigation, that criminals had used AI voice cloning technology, supported by fake emails, to dupe him.

Five tips to protect your business from AI cybercrime

1. Fight AI cybercrime with AI-driven security tools. These tools use machine learning to monitor user behaviour, detect anomalies, and automate threat responses, and can help identify suspicious activity that traditional systems may miss (see the first sketch after these tips).

2. Implement a ‘zero trust architecture’ security framework. The principle assumes that no user, device or network connection is inherently trustworthy, regardless of whether they’re inside or outside an organisation, and enforces strict and continuous verification and authentication for every user and device (see the second sketch after these tips).

3. Workforce education. Human error remains one of the leading causes of successful cyber attacks. Training employees to recognise phishing attempts, deepfakes and suspicious behaviour is crucial in combating AI-driven cybercrime.

4. Real-time threat intelligence is crucial. This gives businesses visibility into the latest cyber attack trends and indicators of compromise, enabling them to update their defences proactively and avoid falling victim to emerging tactics.

5. Partner with a dedicated cybersecurity expert. Managing cybersecurity in-house can be difficult, especially for small and medium-sized businesses. Partnering with an external expert can provide access to 24/7 monitoring, incident response and tailored defence strategies.
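To make the first tip concrete, here is a minimal sketch of the kind of anomaly detection an AI-driven security tool performs, using the open-source scikit-learn library’s IsolationForest. The login records and feature choices are invented for illustration; a production system would train on far richer telemetry.

```python
# Sketch: flag logins that look unlike an employee's normal behaviour.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login: [hour of day, failed attempts first, MB downloaded].
# These values are made up to stand in for normal office behaviour.
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [10, 0, 11], [15, 1, 14], [12, 0, 16],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

# A 3am login after seven failed attempts, followed by a 900MB download.
suspect = np.array([[3, 7, 900]])
print(model.predict(suspect))  # [-1] means the event is flagged as anomalous
```

The point is not the specific model but the approach: rather than matching known attack signatures, the system learns what normal looks like and raises anything that deviates.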
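The second tip can also be illustrated. In a zero-trust design, a request is never trusted merely because it comes from inside the network; every request carries a short-lived signed token that is verified afresh each time. The sketch below uses only Python’s standard library, and the token format and secret are placeholders for illustration.

```python
# Sketch of per-request verification, the core habit of zero trust.
import hashlib
import hmac
import time

SECRET = b"rotate-me-and-store-in-a-vault"  # placeholder secret

def issue_token(user: str, ttl: int = 300) -> str:
    """Issue a signed token that expires after ttl seconds."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user}|{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{user}|{expires}|{sig}"

def verify_request(token: str) -> bool:
    """Run on EVERY request, inside or outside the network perimeter."""
    try:
        user, expires, sig = token.split("|")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user}|{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)

token = issue_token("alice")
print(verify_request(token))        # True while the token is fresh and intact
print(verify_request(token + "x"))  # False: the signature no longer matches
```

Real zero-trust rollouts layer device checks, identity providers and network segmentation on top, but the discipline is the same: verify everything, every time.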
