‘A cheap magic trick’: Amazon, Google, Meta accused of dodging Senate questions
By David Swan
Artificial intelligence chatbots ChatGPT, Google’s Gemini and Meta’s Llama should be deemed “high-risk” and subjected to mandatory transparency, testing and accountability requirements, a Senate inquiry has found, with US tech giant executives facing heavy criticism for dodging its questions.
Chatbots are skyrocketing in popularity throughout schools and workplaces, but the bipartisan Senate committee found that they present significant risks, and the companies building them have failed to answer key questions about the extent to which sensitive data is being used.
It also found the chatbot developers have committed “unprecedented theft” against Australia’s creative workers, using copyrighted materials to train their models without permission or payment.
After nine months of hearings, the committee handed down its final report, issuing 13 recommendations including sweeping EU-style legislation that would introduce guardrails against high-risk AI use cases across the economy.
Its report specifically criticised executives from tech giants Amazon, Google and Meta for their appearances. The executives consistently refused to be transparent and answer questions, particularly about the user data scraped to train the tech giants’ large language models.
“When asked about how they use [personal or private] data to train their AI products, the platforms gave largely opaque responses,” the report says. Amazon’s head of public policy Matt Levey was asked whether audio captured by Alexa devices in people’s homes has been used to train Amazon’s AI products, but did not directly answer the question.
“This refusal to directly answer questions was an ongoing theme in the responses received from Amazon, Meta and Google,” the report said.
“All three companies repeatedly referred the committee to their privacy policies and terms of use as justification for the use of some user data to train their AI products. A recent study found that it would take an Australian 46 hours a month on average to read every privacy policy they encounter.”
As this masthead reported at the time, an AI chatbot interjected during a Google executive’s testimony, causing senators to ask whether artificial intelligence had been used to help prepare her responses. The executive, Lucinda Longcroft, denied that was the case.
“Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick ... Plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” said the committee’s chair, Labor Senator Tony Sheldon.
“These tech giants aren’t pioneers; they’re pirates – pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed.
“They want to set their own rules, but Australians need laws that protect rights, not Silicon Valley’s bottom line. We need new standalone AI laws to rein in big tech and put strong protections in place for high-risk AI uses, while existing laws should be amended as necessary.”
The committee recommended a sweeping new “AI act” to regulate the technology, a concept supported by human rights organisations and media companies, but criticised by tech companies including Atlassian and Australia’s big four banks as going too far.
That approach would bring Australia into line with the EU, Canada, the UK, and some US states including Colorado, and would potentially ban facial recognition tools such as the one used by Bunnings, which was recently found to breach privacy laws.
The committee also found that creative workers, including voice actors, writers and cartoonists, are at the most imminent risk of AI severely impacting their livelihoods. It recommended AI developers be transparent about the use of copyrighted materials, and licence and pay for that work.
More than 36,000 creatives, including actor Julianne Moore, author James Patterson and Radiohead’s Thom Yorke, recently signed an open letter urging the prohibition of using human art to train AI without permission. The Australian Association of Voice Actors provided evidence at the hearing that their contracts effectively granted rights to Amazon to use their voices to generate audiobooks through AI, a move the group said might cost thousands of Australian jobs.
“It’s heartening to see the Senate recognise what creators have been saying, that these AI products work by stealing from artists,” Australian Writers’ Guild chief executive Claire Pullen said.
“We are pleased to see such strong recommendations recognise our contributions to Australia’s economy and national culture. It shouldn’t take a Senate committee to tell tech companies not to steal from thousands of Australians.”
Industry and Science Minister Ed Husic said he was considering the report.
Greens senator David Shoebridge said the report was a “useful starting point”, particularly for recognising that workers in creative fields are routinely being exploited by AI.
“But it’s disappointing the final report doesn’t bring Australia in line with global leaders like the UK and Europe by recommending a national strategy on AI,” he said.
Independent senator David Pocock said the report was valuable, but the government had been too slow to respond to the impact of AI.
“I want to see a commitment to introduce legislation to establish mandatory guardrails for the safe and responsible use of AI in high-risk settings in the next sitting period,” Pocock said. “I also call on the government to prioritise the passage of truth in political advertising legislation, including a ban on the use of AI in electoral matter, ahead of the next election.”
Liberal Senator James McGrath said Labor had been caught “napping” on AI.
“The Coalition believes the regulation of AI is one of the 21st century’s greatest public policy challenges. Nevertheless, we believe an AI policy framework should safeguard Australia’s national security, cybersecurity, and democratic institutions without infringing on the potential opportunities that AI presents in relation to job creation and productivity growth,” he said.
“Since the public release of ChatGPT two years ago, the threats to our sovereignty are clear and in the public domain for all to see. Yet this federal Labor government has failed to take any action to deal with these growing threats to our cybersecurity, intellectual property rights and national security.”