
AI helps US Intelligence track hackers targeting critical infrastructure

Chinese hackers, trying to burrow into US ports and pipelines, use techniques that would be difficult to detect without AI, NSA cybersecurity official says.

NEW YORK — US intelligence authorities are using AI to pick up on the presence of hackers trying to infiltrate and attack American critical infrastructure — and identifying signs of hackers using AI themselves in the attacks.

At a conference this week, cybersecurity leaders discussed burgeoning aspects of AI use by hackers — as well as by law enforcement. Rob Joyce, cybersecurity director at the US National Security Agency, said machine learning and artificial intelligence are helping cybersecurity investigators track digital incursions that would otherwise be very difficult to see.

Specifically, Chinese hackers are targeting US transportation networks, pipelines and ports using stealthy techniques that blend in with normal activity on infrastructure networks, Mr Joyce said, speaking at Fordham University in New York.

These methods are “really dangerous” as their aim is societal disruption, as opposed to financial gain or espionage, Mr Joyce said. The hackers don’t use malware that common security tools can pick up, he added.

“They’re using flaws in the architecture, implementation problems and other things to get a foothold into accounts or create accounts that then appear like they should be part of the network,” Mr Joyce said, referring to what are called living-off-the-land techniques.
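As a rough illustration of the kind of machine-learning detection Mr Joyce describes, the sketch below flags account sessions whose behaviour deviates from a learned baseline of normal activity. The features, numbers and use of scikit-learn's IsolationForest are assumptions chosen for illustration only; the article does not describe the NSA's actual tooling.

# Hypothetical sketch: anomaly detection over account-activity features,
# suggesting how ML might surface "living-off-the-land" activity that
# blends in with normal network behaviour. Feature names and thresholds
# are invented for illustration; this is not the NSA's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of "normal" sessions: [login hour, session minutes,
# MB transferred, admin commands per session] -- all made-up features.
normal = np.column_stack([
    rng.normal(10, 2, 5000),   # logins cluster around business hours
    rng.normal(45, 15, 5000),  # typical session length
    rng.normal(20, 8, 5000),   # typical data volume
    rng.poisson(1, 5000),      # few privileged commands per session
])

# A few suspicious sessions: off-hours logins, long sessions, heavy use
# of built-in admin tooling rather than any detectable malware.
suspicious = np.array([
    [3.0, 180.0, 15.0, 25.0],
    [2.5, 240.0, 10.0, 40.0],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for outliers and 1 for inliers; these sessions
# would likely be flagged as outliers for an analyst to review.
print(model.predict(suspicious))
print(model.decision_function(suspicious))  # lower score = more anomalous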

Law-enforcement authorities have warned of the danger AI poses in the hands of hackers, who can use it to scale and accelerate their attacks. On the defence side, some security experts are concerned that the growing sophistication of AI could leave well-resourced companies with far better cyber defences than businesses that cannot afford the technology.

In a recent interview, Amazon.com Chief Security Officer Stephen Schmidt said the company is going “super aggressive” in using AI in cybersecurity defences. “I think that generative AI has the potential to become an indispensable tool in the hands of security engineers.”

Government authorities and financial institutions need to invest in technologies that will detect deepfake images and videos generated with AI and used for fraud, said Breon Peace, US Attorney for the Eastern District of New York, speaking at the conference.

Already, cybercriminals and nation-state hackers are using generative AI tools to ensnare more victims, Mr Joyce said. Phishing emails are one example: in the past, a telltale clue was poor English, but corporate cybersecurity leaders now say they have seen phishing attempts that appear flawless.

Popular generative AI tools like OpenAI’s ChatGPT and Google’s Bard have protections to prevent users from creating malicious content.

Yet the guardrails on generative AI are “fairly easy to shake,” said Damian Williams, US Attorney for the Southern District of New York, speaking at the same conference. He didn’t refer to any specific companies or tools.

Hackers are also creating their own generative AI tools using open-source models, training them with their own datasets and selling the tools on dark-web marketplaces, said Maggie Dugan, an intelligence analyst at the Federal Bureau of Investigation.

Mr Williams said AI can aid in committing crimes, but won’t necessarily create new forms of crime. “Fraud is fraud. It’s kind of old wine in new bottles.”

The Wall Street Journal


Original URL: https://www.theaustralian.com.au/business/the-wall-street-journal/ai-helps-us-intelligence-track-hackers-targeting-critical-infrastructure/news-story/952e1acde007993dd6adac003858d745