‘High risk’: Industry Minister Ed Husic announces plans to ramp up AI regulation
As leading tech experts call for high-risk impacts of AI to become a “global priority”, Australia is seeking to ramp up regulation.
The federal government could consider a crackdown on facial recognition and other “high-risk” artificial intelligence technologies if a consultation process into how to safeguard the burgeoning sector finds it appropriate to do so.
Industry Minister Ed Husic on Thursday released two reports, aiming to strengthen rules governing responsible and safe use of AI.
Mr Husic said that while there were considerable benefits to AI, there were also significant risks that needed to be guarded against.
It comes as hundreds of leading tech experts warn of a “risk of extinction” from unchecked AI and call for mitigating that risk to be made a global priority.
A statement, released by the Centre for AI Safety on Tuesday, has been signed by executives and academics who say the technology should be prioritised as a societal risk in the same class as pandemics and nuclear wars.
Mr Husic said there was clear concern in the Australian community “about whether or not the technology is getting ahead of itself” as he played down the option of industry self-regulation.
While he would not pre-empt the outcome, Mr Husic said that if the consultation process identified “high-risk areas” requiring a regulatory response, the government would consider one.
“We want people to be confident that the technology is working for us and not the other way around,” he said.
He said if facial recognition was being developed and used in ways that were “outside what the community think is acceptable, then clearly we will be taking a very deep look at that”.
Mr Husic said the government “obviously wasn’t starting from scratch”.
“Australia already has laws and guardrails in place, but in this discussion are they enough?” he asked.
Mr Husic said the Safe and Responsible AI in Australia paper canvasses existing regulatory and government responses both domestically and internationally, identifies potential gaps, and proposes options to strengthen the framework.
The National Science and Technology Council’s Rapid Response Report: Generative AI, meanwhile, assesses potential risks and opportunities in relation to AI, which Mr Husic said provided a scientific basis for discussions about the way forward.
There will be an eight-week consultation process before the government considers its next move.
“Governments have got a clear role to play in recognising the risk and responding to it, putting the curbs in place,” he said.
“Given the developments over the last, in particular, six months, we want to make sure that our legal and regulatory framework is fit-for-purpose, and that’s why we’re asking people, either experts or the community, to be involved in this process, the discussion process, with the papers that we’ve put out, to let us know what their expectations are and what they want to see.
“We need the framework right, that people are confident that it’s working in favour or for the benefit of communities – it’s really important.”
He said using AI safely and responsibly was a “balancing act” that the whole world was grappling with.
The US and EU are also wrestling with how to regulate rapid advancements in AI technology.
“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” he said.
“But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.
“We’ve made a good start … Today is about what we do next to build trust and public confidence in these critical technologies.”