McDonald's AI breach exposes major security risk for Aussie businesses
A McDonald’s chatbot protected by a simple password has exposed how companies are gambling with security in their race to adopt artificial intelligence.
Artificial intelligence is rapidly transforming the modern workplace, poised to become the “next colleague” for millions of employees.
But this technological leap, while promising, also carries a significant risk: it could become a “security nightmare,” warns leading identity management firm Okta.
It isn’t scaremongering, chief executive Todd McKinnon said: fast food behemoth McDonald’s served up a cautionary tale two months ago when it inadvertently exposed millions of job applicants’ personal information to hackers.
The cause? Maccas’ AI agent Olivia. This AI, designed to collect personal data and even conduct personality tests, was rushed into deployment with a glaring vulnerability: a shockingly simple ‘123456’ password and a complete absence of multi-factor authentication (MFA).
Mr McKinnon said McDonald’s is not an isolated case. Even without AI in the mix, Australia’s largest industry superannuation funds failed to secure members’ accounts with MFA despite repeated regulatory warnings, allowing criminals to siphon hundreds of thousands of dollars in retirement savings.
Mr McKinnon said identity verification was a fundamental step in bolstering security but was often dangerously overlooked, particularly as a ‘fear of missing out’, or FOMO, mentality grips businesses racing to deploy AI.
Underscoring the risk, Mr McKinnon said an Okta study revealed 91 per cent of businesses are already leveraging AI agents, yet only 10 per cent possess a well-defined governance strategy.
“Every CEO and every board of directors is pushing their team to do more with AI,” Mr McKinnon said.
“Everyone wants to be an AI company, and they want to adopt AI, and they want to benefit from it. The pressure to move fast, to do it before your competition, there is a lot of keeping up with the Joneses in terms of seeing what people are doing, and wanting to be the first one to have these capabilities. And I think that’s a risky combination.”
This widespread enthusiasm often leads to a dangerous acceleration in deployment that outpaces the establishment of robust security protocols.
Mr McKinnon said that in their haste to deploy AI, companies are “making it up as they go along” when constructing AI agents. These agents are frequently granted extensive permissions within development environments, with security treated as an afterthought, before being rushed into live production.
The McDonald’s bot, initially in a “lab environment” and deemed to hold “not real data,” was fast-tracked into operations, carrying its glaring vulnerability with it.
Mr McKinnon said the integration of these intelligent agents into daily operations, if mishandled, presents a “security nightmare waiting to happen.”
Unlike conventional software, he said AI agents are “non-deterministic,” meaning their behaviour can be unpredictable, making them uniquely susceptible to sophisticated manipulation by malicious actors.
Mr McKinnon likened them to a “combination between a person and an application,” embodying the access privileges of an application with the unpredictable nature of a human.
He said the Okta study underscores how pervasive this issue has become, with this significant governance deficit especially perilous for sectors handling highly sensitive information, such as “big healthcare companies, big financial services companies that have real data, that’s risky”.
Paradoxically, smaller businesses, which often rely on established, off-the-shelf solutions, might be “less susceptible” to these complex AI-related breaches than larger enterprises that mistakenly believe they can “move the needle on this AI stuff” by developing bespoke solutions.
For the average consumer, the advice is straightforward but crucial: be wary of the “bleeding edge” and “be careful what information you share with services.”
Mr McKinnon acknowledged that in the highly interactive world of chatbots, consumers are “over-sharing” personal information without full awareness.
The conversational nature of AI interactions makes it “pretty hard to not divulge some of this personal information, or at least things that you’d otherwise not want to divulge.”
The path forward, according to Mr McKinnon, necessitates a fundamental recalibration toward a unified identity security fabric.
This approach ensures every AI agent is meticulously managed, rigorously monitored, and granted only the bare minimum access required for its designated tasks.
“You have to get identity right in order to get AI right,” Mr McKinnon said.
“Five or six, seven years ago, everyone realised that with everything moving to the cloud, you had to really have good identity management. Just have security. Firewall wasn’t going to save you. Endpoint wasn’t going to save you. And this is this whole zero trust thing. People said zero trust, all this identity, was really important.
“We’re coming to the same point with AI, which is we all know this stuff’s very powerful. It’s a huge opportunity. But, there’s these risks, and a lot of them are identity security risks.”
