NewsBite

AI can be ‘breached in minutes’, warns Tenable’s top security chief

Corporate AI systems face catastrophic security flaws that could devastate businesses, with hackers exploiting trusted platforms to steal private messages and confidential documents within minutes.

Top security chief Robert Huber says corporate AI can be ‘breached in minutes’.
Top security chief Robert Huber says corporate AI can be ‘breached in minutes’.

A top cybersecurity executive is sounding the alarm on an immediate threat to corporate data, revealing that leading enterprise artificial intelligence platforms can be breached in minutes, allowing hackers to access private messages and sensitive corporate documents.

Robert Huber, chief security officer for Tenable and a former adviser to the US National Security Agency, described the risk of “algorithmic insiders” — generative AI systems granted broad, privileged access to internal data — as the next major breach vector businesses face.

“I personally look at it almost like a digital employee, if you will. So it’s a persona that exists in the environment and that has some capabilities on its own,” Mr Huber said.

Tenable chief security officer Robert Huber.
Tenable chief security officer Robert Huber.

The danger is that this “digital employee” has sweeping access to all data repositories — from HR systems to email to instant messaging — and its trust can be exploited.

The critical vulnerability, Mr Huber says, is a technique known as “jailbreaking,” which manipulates the AI’s prompts to force it to override its internal security guardrails and retrieve data it shouldn’t.

It comes as security professionals grapple with the implications of a recent cyber espionage campaign executed by Chinese state-sponsored hackers, which used AI tools from the US company Anthropic to automate attacks on about 30 global entities, including financial institutions and major technology firms.

Mr Huber said the Anthropic campaign was a wake-up call, demonstrating the speed and scale with which adversaries can now operate.

“The Anthropic attacks, quite honestly, from a security perspective, there is nothing novel or remarkable. It was just the speed and the ease with which those attacks could occur and the scale that was unique.”

The hackers manipulated Anthropic’s “Claude Code” tool into performing approximately 80 to 90 per cent of the tactical operations independently, achieving an attack speed that would be physically impossible for a team of human hackers. The method of attack was a form of “jailbreaking,” where hackers masked their malicious intent by adopting a role-play persona, convincing the AI it was an employee of a legitimate cybersecurity firm.

Mr Huber said this method exposes an immediate and profound risk for all organisations integrating AI.

He said Tenable performed its own internal penetration tests on commercial large language model applications currently being evaluated by large enterprises. Mr Huber said his team successfully executed a jailbreak on multiple platforms in a matter of minutes, accessing highly sensitive corporate data.

The successful breaches accessed private messages between employees for which the tester had no native permission and revealed sensitive documents from a Google-integrated LLM.

“Imagine if that’s a privileged communication with a lawyer or human resources regarding an employee matter or even a customer. That’s significant,” Mr Huber said.

The core vulnerability stems from the fact that the LLM is often granted the access level of a privileged user across all data sources. While traditional role-based access controls should prevent human employees from seeing certain data, those controls fail when the AI agent itself is manipulated.

“From a security professional standpoint, I have very limited controls available to me to enforce to ensure that that does not occur,” Mr Huber said.

Top security chief Robert Huber warns corporate AI can be ‘breached in minutes’. Picture: iStock
Top security chief Robert Huber warns corporate AI can be ‘breached in minutes’. Picture: iStock

The problem is compounded by a wave of “stealth AI” adoption, where new capabilities are being introduced by vendors natively into existing corporate platforms – like a customer relationship management platform or HR systems – without the company’s knowledge or a formal security review.

“Platforms or technologies implement AI within the solution, and we’re not even really aware that it’s been turned on or enabled,” Mr Huber said, citing an example where a chief information security officer only learned a major software vendor had enabled an AI feature when it simply turned on.

Furthermore, Mr. Huber pointed to a critical operational challenge seen in the Anthropic attack: hallucination.

Although the Chinese hackers successfully automated tasks, investigators found the AI often fabricated data or overstated its findings, requiring human validation. Internally, Mr Huber’s team found the same issue when jailbreaking corporate LLMs, where truthful, sensitive disclosures were often mixed with false inferences.

“If somebody takes action [in] the organisation based on information, that it’s false information,” he cautioned, turning the security risk into an organisational risk.

To mitigate this rapidly expanding threat surface, Mr Huber insisted that Australian executives must immediately establish a governance framework for sanctioned AI applications.

“The organisation should adopt a governance framework of how they create a process for the approved use of sanctioned AI applications,” he said. This includes demanding security and legal reviews and then enforcing a policy that treats the AI like a human employee, complete with rigorous “onboarding and offboarding” processes to ensure the privileged agent is not running rogue in the environment.

Originally published as AI can be ‘breached in minutes’, warns Tenable’s top security chief

Add your comment to this story

To join the conversation, please Don't have an account? Register

Join the conversation, you are commenting as Logout

Original URL: https://www.goldcoastbulletin.com.au/business/ai-can-be-breached-in-minutes-warns-tenables-top-security-chief/news-story/08315c52d048eabd1c51cdd8a6482d68