AI to spark new wave of powerful cyber attacks as academics blast Albanese government’s plan, warns Ashurst
The rampant rise of AI-powered tools is set to unleash a new wave of cyber attacks, and the Albanese government doesn’t take this threat seriously enough, warns law firm Ashurst.
Criminals are set to harness the power of artificial intelligence to perform more sophisticated and costly cyber attacks this year – and what’s more, most Australian companies won’t even know their systems have been infiltrated.
Global law firm Ashurst issued the warning as the world’s biggest technology companies compete in an AI arms race, placing the much-hyped technology into the hands of billions, including organised crime syndicates.
Cyber crime has been escalating in the past 18 months, with attacks lobbed at some of Australia’s biggest companies, including Optus, Medibank and Latitude Financial, compromising the personal data of millions of customers.
Over the Christmas period, further hacks were unleashed on St Vincent’s Health, ASX-listed car dealer Eagers, Yakult and Victoria’s courts system.
John Macpherson, a partner at Ashurst’s Australian risk practice, said generative AI – which can take billions of data points and create a raft of high-quality content, from photorealistic images to computer coding via simple verbal prompts – had changed the game when it came to cyber security.
“What the bad guys are doing just rapidly evolves,” Mr Macpherson said.
“The ability for them to use generative AI to make their attacks more targeted, harder to detect, more specific … is quite a low bar of entry now to do something malicious.”
Australia’s corporate leaders say the risk of a cyber attack is now the single biggest external threat to the running of their businesses and the top issue that keeps them awake at night, according to The Australian’s 2024 CEO Survey.
Mr Macpherson said hackers could use AI to circumvent security once inside a company’s system, turning off alerts and alarms. This means companies may not be aware they have been hacked until a criminal demands a ransom.
“In nearly every attack we saw last year in the forensic reports … there was evidence that threat actors, once they’re inside your system, switch off all of the alerts. They turn off your antivirus malware mechanisms – they’re very good now at identifying what security you have in place and what they need to do to circumvent it.”
Generative artificial intelligence – which ChatGPT catapulted into the mainstream, amassing 100 million active users within two months of its launch in 2022 – has also made scams harder to detect.
“It’s not only writing code, but it can write phishing emails or SMS messages,” Mr Macpherson said.
“A lot of us have been trained to identify the Nigerian prince emails, but some of those emails are now getting really difficult. That combination of generative AI to use deep fakes to improve how they can trick humans to behave a certain way, I think, continues to be really concerning.”
Criminal groups from Eastern Europe, North Korea, China – some of whom Mr Macpherson said had links to state-based sponsors – and increasingly South America have found such hacks lucrative.
The Albanese government released its responsible and safe use of AI interim response this week – almost eight months after vowing to launch widespread reforms to govern the technology’s use.
While business groups and tech companies have praised the government’s response, which focused more on strengthening existing laws than creating new ones, it has attracted some criticism that it doesn’t take AI threats seriously enough.
Charles Darwin University associate professor Niusha Shafiabady said: “To me, threats AI and technology pose to people are two types: long-term and short-term. The paper our government has released lacks the long-term threats altogether”.
RMIT’s Nataliya Ilyushina added: “Not having enough regulation can lead to market failure, where cybercrime and other risks stifle business growth, lead to high costs and even harm individuals”.
But Mr Macpherson said AI can also be used to fight AI.
“The flip side is of course, AI gets better and better at helping us detect anomalous behaviour in systems so it makes detection faster.
“We can get across very large data sets and figure out what’s going wrong, so AI has uses on both sides of the coin.”
The key is to understand the threat landscape and, more crucially, what data a company is holding so it can be adequately protected.
The stakes are high for a failure to perform these two key tasks, with the Australian Securities and Investments Commission and other regulators putting companies on notice, while aggrieved customers and investors have been quick to lob class actions – which are costly both financially and reputationally.
“You need a really robust way of identifying what your key risks are, what your key digital assets are – everyone talks about the crown jewels,” Mr Macpherson said.
“The recent cyber attacks have really brought what digital assets that we hold and just how much we don’t know about that to the forefront.
“It’s not just a matter of, ‘OK, we’re going to delete all of our customer records greater than eight years ago’, pressing a button and it’s all gone. Little bits of those records sit across multiple systems. Just finding where all these records are and getting rid of them is a huge technical process problem.”
To this end, Mr Macpherson said the days of companies buying a single cyber security solution are over.
“That great temptation to want to buy a product and put it over the top of all your systems and then (think) you’re secure just never works, but that’s often what’s offered in the marketplace.”
Mr Macpherson said a multi-pronged approach – including digital asset and vulnerability audits, followed by a plan to close those gaps – was vital.
“Of nearly all the big attacks we’ve seen, the root cause that led to the incident was already known to the organisation, and that’s very concerning for boards of course because that has regulatory and legal impacts.
“One of the things regulators will look for is when did the board know and understand (a threat) and was the timeliness, budgeting and resourcing to fix known issues appropriate.”