Google admits revising pro-voice AI chatbot, calls for amendments to proposed risk-based regulation
The tech giant says it doesn’t want to influence politics and shared its submission to the government’s AI regulation consultation.
Google says it wants its systems to be neutral on the voice to parliament and doesn’t want to influence politics with its generative AI tools.
Furthermore, Google has called for amendments to the risk-based approach to AI regulation floated by the government.
“We don’t want these systems to be out there influencing politics,” Google Australia’s government affairs and public policy director Alex Lynch told The Australian.
“We want to make sure that that is tightly controlled.
“But at the same time, we can’t go through and audit every single response. It’s probably inappropriate for us to go and audit every single response.”
Mr Lynch was speaking about an article in The Australian in May this year that found Google’s newly launched ChatGPT competitor, Bard, backed the voice to parliament and praised left-wing politicians while calling their counterparts on the other side of the aisle “controversial”.
“It could just be that the corpus of data that was out there on the web had a particular skew to it,” Mr Lynch explained. “Now, that’s why we have to look at that, where we have a process for looking at these sorts of policy interventions that we might need to make. And, you know, articles like [The Australian’s] are useful because they call out some of the issues that are arising and allow us to look at those in more detail.”
Google changed Bard’s response on the day of publication; from then on, the chatbot said it could not respond to questions about the voice to parliament.
“You saw the system change, we do look at these things as they go up,” Mr Lynch said.
At the time, Google did not acknowledge that it had responded specifically to the report.
Google Australia also shared its submission to the Australian government’s AI regulation consultation, which warns of a “loss of local talent and investment” in AI due to legal uncertainty in Australia’s copyright framework, as well as “risks emerging in data governance”.
At the same time, the tech giant has broadly backed, while calling for amendments to, a “possible draft risk management approach” floated in the discussion paper the government published when it opened its AI regulation consultation.
The model would place AI systems under different levels of regulatory scrutiny according to the level of risk they pose.
Mr Lynch – who described himself as the “key drafter” of the submission – said the government should also weigh opportunity cost when assessing AI risk.
“An assessment should acknowledge the opportunity costs of not using AI in a specific situation or of intentionally developing AI without particular capabilities,” the submission reads.
“If an imperfect AI system is shown to perform better than the status quo at a crucial life-saving task, for example, it may be irresponsible not to use the AI system.”
Mr Lynch agreed this was an “extra element” added to the understanding of risk proposed in the government paper and in the EU regulation on which the idea was largely modelled.
“You can see the British approach, they took this approach where they specifically called out the foregone benefits as something that the risk assessment should consider,” Mr Lynch said. “And we think that’s appropriate for Australia as well because it means that then you look at, are we creating a bigger problem for ourselves by specifically not using the technology than we would if we used the technology even in that risky scenario?”
The submission also calls for the likelihood of harm to be considered alongside a system’s level of risk.
Mr Lynch said this would be Google’s only submission to the process and that no other confidential submissions would be tendered.
The submission broadly calls for AI regulation to be “consistent with innovation” to “ensure that Australia maximises its local capacity and secures a significant place in the global AI ecosystem”.