Senate inquiry calls for more government regulation of generative AI
Home Affairs has warned of lethal security threats from “malicious” artificial intelligence (AI) used to build bioweapons and bombs and create synthetic drugs.
Hackers could “jailbreak” AI models to steal information for industrial-scale data breaches and ransomware attacks, the federal government department has told a Senate inquiry into generative AI.
The threat was revealed as the Senate Select Committee on Adopting AI tabled its first report, recommending economy-wide regulation of AI technology.
“The use of deepfakes and other AI-generated material or AI tools to harm democracy, sow dissent and erode trust in public institutions is perhaps one of the most significant risks of AI,” the report states.
“AI systems could be used for designing chemical weapons, exploiting vulnerabilities in safety-critical software systems, synthesising persuasive disinformation at scale, or evading human control.”
A million workers in creative industries are victims of “unprecedented theft of their work by multinational tech companies operating within Australia”, the report states.
It calls on the federal government to force AI developers to be “transparent about the use of copyrighted works in their training datasets, and that the use of such works is appropriately licensed and paid for”.
The report highlights a Department of Home Affairs warning that AI will make it easier to create convincing and personalised phishing campaigns to fleece consumers – even by criminals who are not cyber-savvy.
“The growing demand for subscription-based criminal models, commonly referred to as ‘crime-as-a-service’, will be further enhanced in accessibility and efficiency through the integration of AI,” Home Affairs says in its submission.
“AI may also support increased outsourcing of malicious cyber activity from nation states to criminal syndicates.
“AI will lower the barriers for non-sophisticated actors, equipping them with previously unattainable capabilities such as access to instructions to develop bioweapons, synthetic drugs or explosive devices.”
Ahead of the federal election due by May next year, Home Affairs warned that AI trickery could influence voter and political sentiment.
“Generative AI allows anyone – from passionate citizens to malicious actors – to create unique letters, emails and social media posts that skew elected officials’ perceptions of constituent sentiment,” it states.
“Foreign governments could use AI to create co-ordinated and inauthentic influence campaigns that are designed to foster widespread misinformation, incite protests, exacerbate cultural divides and weaken social cohesion, covertly promote foreign government content, target journalists or dissidents and influence the views of Australians on key issues.”
The eight-month Senate inquiry into the adoption of AI criticised AI developers including Amazon, Meta and Google for scraping people’s personal information from the internet to train their AI models.
The committee’s chairman, Labor Senator Tony Sheldon, said on Tuesday that the tech giants had dodged repeated questions about how they use people’s private data.
He said Amazon had refused to disclose how it uses data recorded from Alexa devices or published on Kindle.
Meta admitted that it scraped the profiles of every Australian Facebook or Instagram user going back to 2007, but refused to explain how it uses data from its supposedly confidential WhatsApp and Messenger services.
Google told the inquiry it could not detail how it uses copyrighted or private data because it needs to protect its own intellectual property.
Senator Sheldon said that general-purpose AI models, like ChatGPT, “must be treated as high-risk by default, with mandated transparency, testing and accountability requirements”.
“If these companies want to operate AI products in Australia, those products should create value, rather than just strip mine us of data and revenue,” he said.
“Amazon, Meta and Google have already committed arguably the largest act of theft in Australian history by scraping the collective body of human knowledge and creative output, without consent or payment to the owners of that work.
“Where tech companies have scraped copyrighted data without consent or payment, the Government needs to intervene … and developers should be forced to fairly license and pay for their work.”
Senator Sheldon said workplace health and safety laws should be expanded to cover the use of AI.
“(Workers) shouldn’t be hired, rostered or managed solely by an algorithm or be subject to intrusive AI surveillance,” he said.
The committee called for stand-alone laws to “rein in big tech”.
“These tech giants aren’t pioneers; they’re pirates – pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed,” Senator Sheldon said.
“They want to set their own rules, but Australians need laws that protect rights, not Silicon Valley’s bottom line.”