AI inquiry: Australian privacy laws ‘decades out of date’, reform and redress needed
A prominent human rights lawyer says lagging, decades-old laws have left Australians exposed to a rapidly advancing phenomenon.
A prominent human rights lawyer says Australia’s “decades out of date” privacy laws need to be urgently reformed to protect against the AI industry’s rapid advancements while also calling for tough penalties on companies that do the wrong thing.
Speaking at a parliamentary inquiry on adopting artificial intelligence, Digital Rights Watch founder Lizzie O’Shea said the legislative lag had allowed “data extractive business models” to expand at a “significant expense to society”.
She said laws drafted “decades ago” took a narrow view of what counted as “personal information” and didn’t capture details such as geolocation data.
“Fair consent” must also be considered so companies could not repurpose information collected years ago for the development of algorithms or tools that “were not even imagined at the time”, Ms O’Shea told the committee.
“That strikes me as an unreasonable and unfair use of that information without the further requirement to obtain consent,” she said.
Governments should also establish a regulatory body to punish companies that breach the rules, including an “enforceable prohibition on unfair treatment”, and create avenues that would allow individuals to sue a dodgy business, Ms O’Shea said.
“We think regulatory intervention should focus on transparency and greater oversight and auditing of algorithms, and we would say for high-risk applications moratoriums as well,” she said.
“While the parliament leaves this reform unimplemented, it is exposing vulnerable Australians to bad actors within a rapidly developing field.”
Ms O’Shea also highlighted the “grave potential” for social media algorithms to undermine democratic institutions and practices, with politicians warning against social media misinformation in the wake of Donald Trump’s attempted assassination.
“I think these are ultimately problems of data-extractive business models, because misinformation and disinformation is extremist content that goes viral, because social media platforms have an interest in keeping people on devices,” she said.
“That’s part of their business model to continue extracting data, continue advertising to people.
“Recommender systems that allow for that kind of extremism to occur I think are dangerous. They are problematic for the purposes of having a discussion about our social democracy, for having a place in which we overcome social divisions in online spaces (and) for public participation.”
Australia’s largest media union, the Media, Entertainment and Arts Alliance (MEAA), is demanding federal laws that would require businesses to alert creators and seek their permission before using their content to train AI algorithms.
The union is also calling for a tax on businesses that replace workers with AI tools.
MEAA campaigns director Paul Davies said while the union supported technological development, there needed to be “fair compensation” for content.
“There needs to be an economy-wide way of reclaiming the productivity benefits that AI may or may not achieve, alongside measures to limit and drive out the exploitation and the theft,” he told the inquiry.
“So wherever AI can be used in a socially acceptable way, in an industrially acceptable way, and where it produces productivity benefits, working people need to be given a slice of that.
“They need to gain from that productivity.”
While not speaking specifically about media and creative industries, Ms O’Shea said there was a “real opportunity” to establish requirements and protections while the technology was still new.
“It’s imperative that management, whether it’s a government service or a private sector organisation, considers and engages with workers early rather than imposing these tools on workers and then either being surprised that it doesn’t work or being surprised that people feel exploited,” she said.