
‘Poison’ AI threat to national security, cyber centre warns

Hackers could exploit data that systems including ChatGPT rely on, feeding false information into platforms used for everyday activities like medical and financial advice.

Malicious state actors and cyber criminals could “poison” artificial intelligence data sets, posing a threat to the security and safety of Australians unless the government implements immediate safeguards, a report has warned.

Australia’s Cyber Security Cooperative Research Centre has sounded the alarm over the potential for hackers to exploit the data that AI systems including ChatGPT rely on, feeding false information into platforms used for everyday tasks such as medical and financial advice. This would result in incorrect outputs, which could cause significant harm to governments, organisations and citizens.

A new CSCRC report released on Monday warns that ChatGPT and similar AI systems are “pre-trained” in an unsupervised manner using huge data sets to generate responses to questions from human users.

“Currently, vast amounts of training data are scraped from the open internet without moderation, which means data sets inevitably include offensive, inaccurate or controversial content,” the CSCRC says in its report, Poison the Well.

“Any threat to AI data inputs presents a threat to the integrity of an AI system itself and, more worryingly, could have serious societal impacts. Hence, cyber attacks aimed at manipulating AI data sets must be considered as a serious emerging cyber threat vector.”

The CSCRC said the manipulation of such data sets was known as “data poisoning”, with cyber criminals seeking to inject false or misleading data into the sets AI systems are trained on, skewing results and undermining public trust.
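
The report describes the mechanism only at a high level. As a purely illustrative sketch (none of the data, claims or helper functions below come from the CSCRC report; they are invented for the example), the following Python snippet shows how a system that simply repeats the most frequently seen claim in an unmoderated scraped corpus can be steered towards a false answer once an attacker floods that corpus with repetitions of the falsehood.

```python
# Illustrative sketch only: a toy model of "data poisoning". The corpus,
# claims and helper functions are hypothetical, not from the CSCRC report.
from collections import Counter

def train(snippets):
    """'Train' by counting how often each claim appears in the scraped corpus."""
    return Counter(snippets)

def answer(model, topic):
    """Answer by returning the most frequently seen claim mentioning the topic."""
    candidates = {claim: count for claim, count in model.items() if topic in claim}
    return max(candidates, key=candidates.get) if candidates else "no data"

# Clean corpus scraped from the open internet (hypothetical).
clean = ["aspirin can thin the blood"] * 5

# An attacker floods unmoderated sources with a false medical claim.
poison = ["aspirin cures all infections"] * 20

print(answer(train(clean), "aspirin"))           # -> aspirin can thin the blood
print(answer(train(clean + poison), "aspirin"))  # -> aspirin cures all infections
```

The toy example compresses what, in a real large language model, would be a statistical shift across billions of training tokens, but the underlying risk is the one the report describes: whoever can inject enough data into the training set can skew the outputs.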

CSCRC chief executive Rachael Falk said she was concerned the strategy would be used in foreign interference activities, with malicious actors using the technique to “shape the narrative” and cause major societal upheaval.

The report will help inform Defence Minister Richard Marles and Industry Minister Ed Husic when they attend the world’s biggest AI summit in Britain this week.

The two-day AI Safety Summit, hosted by British Prime Minister Rishi Sunak at Bletchley Park, where Alan Turing cracked Nazi Germany’s Enigma code during World War II, is the first major global gathering of its kind on AI safety.

World leaders at the summit will discuss the threats of AI and global efforts to regulate and put guardrails in place.

US President Joe Biden is also expected this week to sign an executive order setting out regulations and rules to be enforced by key federal agencies.

The Australian understands that the Albanese government’s seven-year Cyber Security Strategy, which focuses on Australia’s digital security requirements amid a rise in AI technologies, will be released before the end of next month.

Ambassador Kevin Rudd, Microsoft Australia-NZ managing director Steven Worrall, Microsoft President Brad Smith, Anthony Albanese and US Ambassador to Australia Caroline Kennedy at the announcement in Washington of a $5bn investment in artificial intelligence and cloud computing operations. Picture: Geoff Chambers

Ms Falk called for Australia to implement an “AI act” similar to what is being considered in the EU, which would introduce regulations around the data sets that ChatGPT and other platforms could use.

“We need to not sit back and wait to regulate,” Ms Falk said. “The riskier parts where we want government to have greater oversight or guardrails … (include) the health sector. I think government can step in earlier.”

She said Australia had been “too slow” to set rules ensuring new technologies did not have unintended side effects, and needed to act fast or risk major cyber breaches and foreign interference attacks via AI platforms.

Politico, which obtained a draft White House executive order on AI, said Mr Biden would “deploy numerous federal agencies to monitor the risks of artificial intelligence and develop new uses for the technology while attempting to protect workers”.

Mr Husic in May released the government’s Safe and Responsible AI in Australia discussion paper canvassing existing regulatory and governance responses in Australia and overseas. The paper will inform Australia’s response to the rapid rise of AI.

Original URL: https://www.theaustralian.com.au/nation/poison-ai-threat-to-national-security-cyber-centre-warns/news-story/90087b322a28382a04a0ca02039bb734