NewsBite

AI v AI: who wins the cyber battle?

AI granny Daisy revealed that AI-generated voices are now so realistic that even seasoned scammers can’t always tell the difference.


Last year the internet lit up with stories about Daisy, the so-called “AI granny” who spent hours chatting with scam call centres. Her purpose was simple but clever: to waste scammers’ time by keeping them tied up on the phone so they had less time to target real victims.

While amusing on the surface, Daisy revealed something far more serious. AI-generated voices are now so realistic that even seasoned scammers can’t always tell the difference. Some spoke to Daisy for more than 40 minutes without realising they were talking to a machine.

Daisy was designed for good, but her existence raises an obvious question: what if this technology were used for harm? The question is no longer hypothetical. In cybersecurity, AI is rapidly reshaping the threat landscape, but it can also be part of the solution.

In the past, cyber attacks typically focused on standard technology interfaces. Hackers would exploit software weaknesses in systems such as customer portals, or send typo-ridden phishing emails carrying malware that signature-based scans could reliably detect.
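For readers unfamiliar with the term, signature-based detection amounts to little more than the sketch below: hash an incoming file and look the digest up in a list of known-bad fingerprints. The hash set here is a placeholder for illustration, not a real signature database.

```python
import hashlib

# Placeholder signature set. A real engine would query a
# vendor-maintained database of millions of hashes; this value
# is simply the SHA-256 of an empty file, used for illustration.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_malware(path: str) -> bool:
    """Return True if the file's SHA-256 digest matches a known signature."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_SHA256
```

Because this approach only catches byte-for-byte matches, malware that is freshly generated or slightly mutated, something AI now makes trivial, slips straight past it.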


But now AI has changed the game. It enables criminals to better target people and weaknesses in process design, not just software and servers. They are picking up the phone, sending AI-forged documents and using AI tools to impersonate real individuals in ways that can be difficult to detect.

Case in point: scammers recently used deepfake technology to impersonate senior managers of a multinational company on a video call, successfully tricking a Hong Kong-based employee into wiring millions of dollars.

Criminals are also using AI to attack other weak points. In industries such as financial services, it is common for customer transactions to involve a mix of steps across different platforms. These may include identity verification, form submission, email correspondence, document uploads and phone conversations. Criminals can now use AI to scale attacks across these channels, increasing the odds of success.

They might begin by scraping the internet for names, job titles, or breached login details. From there, AI can be used to generate fake identity documents or produce pre-filled forms that request changes to contact details or banking information. These materials can be submitted through legitimate channels and often appear no different from genuine customer requests.

This type of multichannel attack presents new challenges. However, there are three critical steps organisations can take, with the aid of AI, to better prepare.

First, risk assessments must become broader and more realistic. These assessments should consider how an adversary might target the organisation through all available channels, not just through IT systems.

Forward-looking organisations are now running simulations that mimic the tactics and technologies of modern cybercriminals, including AI-generated voice calls and document forgery. In these exercises, AI can assist by identifying unusual patterns in large datasets, helping to stop attacks in real time, and by simulating attack scenarios to uncover hidden vulnerabilities.
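As a concrete, if simplified, picture of that pattern-spotting: the sketch below trains an off-the-shelf isolation forest on synthetic login telemetry and flags the rare, extreme bursts typical of password spraying. The features, numbers and thresholds are assumptions for illustration, not a production detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical telemetry: one row per account per hour,
# columns = [failed logins, portal requests].
normal = rng.normal(loc=[2.0, 30.0], scale=[1.0, 5.0], size=(500, 2))
bursts = rng.normal(loc=[40.0, 200.0], scale=[5.0, 20.0], size=(5, 2))
events = np.vstack([normal, bursts])

# Isolation forests score how easily a point can be isolated;
# rare, extreme points (like spraying bursts) isolate quickly.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 marks an anomaly, 1 marks normal

print(f"{int((flags == -1).sum())} of {len(events)} events flagged for review")
```

The same idea scales to billions of events; the hard part in practice is choosing features that capture how a multichannel attack actually looks.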

Second, external threat intelligence should play a central role in both risk management and awareness efforts. AI tools can help aggregate and analyse this threat intelligence at speed and scale, identifying patterns humans might miss.

This intelligence should feed directly into updated risk assessments and process design, and be used to inform training and awareness campaigns for frontline staff.
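The mechanics of the aggregation step can be surprisingly simple. The sketch below merges indicator-of-compromise (IOC) feeds and matches them against raw log lines; the feed and log contents are invented, and a real pipeline would layer enrichment, scoring and AI-driven triage on top.

```python
from typing import Iterable

def aggregate_iocs(feeds: Iterable[Iterable[str]]) -> set[str]:
    """Merge and deduplicate indicators (IPs, domains) across feeds."""
    return {ioc.strip().lower() for feed in feeds for ioc in feed}

def match_logs(log_lines: Iterable[str], iocs: set[str]) -> list[str]:
    """Return log lines that mention any known indicator."""
    return [line for line in log_lines if any(ioc in line.lower() for ioc in iocs)]

# Invented feed and log data for illustration.
feeds = [
    ["198.51.100.7", "evil-example.test"],
    ["EVIL-EXAMPLE.TEST", "203.0.113.99"],
]
logs = [
    "09:14 login attempt from 198.51.100.7",
    "09:15 page view from 192.0.2.44",
]
print(match_logs(logs, aggregate_iocs(feeds)))
# -> ['09:14 login attempt from 198.51.100.7']
```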

Awareness programs should not rely on static training modules. Instead, they should include short, regular updates using real-world case studies. These stories make the threat feel more relevant and can help employees spot similar behaviour in their own day-to-day work. Training should also be role-specific. What a contact centre agent needs to watch for is not the same as what an executive assistant or finance officer needs to know.

Finally, many organisations still run disconnected systems, teams and case management processes for digital threats, fraud and customer service. To detect a co-ordinated, multichannel attack, these teams need consolidated systems and processes.

AI can support this integration because it can notice irregularities occurring in different parts of an organisation, such as a surge in failed log-in activity (a sign of password spraying), followed by unusual patterns of calls to the contact centre and fraudulent refund claims in customer service systems. To achieve this, it needs visibility across all channels, rather than being siloed in traditional “cyber” channels.
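A minimal sketch of that cross-channel correlation, with hypothetical channel names and thresholds: flag any customer who trips alerts on two or more channels within a short window of their first suspicious event.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical events from three channels: (customer_id, channel, time).
events = [
    ("cust-42", "portal:failed_login_surge", datetime(2025, 5, 1, 9, 0)),
    ("cust-42", "contact_centre:detail_change_call", datetime(2025, 5, 1, 9, 40)),
    ("cust-42", "service:refund_claim", datetime(2025, 5, 1, 10, 15)),
    ("cust-77", "portal:failed_login_surge", datetime(2025, 5, 1, 11, 0)),
]

WINDOW = timedelta(hours=2)

def escalations(events, min_channels=2):
    """Flag customers with alerts on min_channels or more distinct
    channels within WINDOW of their first suspicious event."""
    by_customer = defaultdict(list)
    for cust, channel, ts in events:
        by_customer[cust].append((ts, channel))
    flagged = []
    for cust, items in by_customer.items():
        items.sort()
        first = items[0][0]
        channels = {ch for ts, ch in items if ts - first <= WINDOW}
        if len(channels) >= min_channels:
            flagged.append(cust)
    return flagged

print(escalations(events))  # -> ['cust-42']
```

No single channel’s signal looks alarming on its own; it is the combination, visible only with consolidated data, that warrants escalation.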

Machine learning tools can help identify unusual behaviour, but collaboration with human experts remains essential. When paired with expert analysis, AI-driven insights can significantly accelerate response times and improve decision-making.

Creating converged approaches that allow cyber operations, fraud analysts and customer support to flag and escalate suspicious activity together is just one of the meaningful steps organisations can take to bolster preparedness in a changing cybersecurity landscape. Cybercriminals are adopting cutting-edge AI technologies; organisations must do the same or risk being left behind.

David Owen is Partner, Cyber, Deloitte Australia.

-

Disclaimer

This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional adviser. 

Deloitte shall not be responsible for any loss sustained by any person who relies on this publication. 

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. Please see www.deloitte.com/au to learn more.

Copyright © 2025 Deloitte Development LLC. All rights reserved.

-

Original URL: https://www.theaustralian.com.au/business/tech-journal/ai-v-ai-who-wins-the-cyber-battle/news-story/c5fe3f1447d7c85cef1375ab2f0e3dab