
The trouble with transparency when it comes to AI

By Paul Sakkal

The tech sector has cautioned the government over plans to give citizens the right to learn how artificial intelligence systems make decisions, such as denying a bank loan or setting insurance premiums, while AI experts say an in-principle agreement to allow individuals to be forgotten online could be unworkable for systems like ChatGPT.

Attorney-General Mark Dreyfus on Wednesday released the Albanese government’s response to a major review of privacy laws, which called for sweeping new privacy protections.

The government plans to give citizens the right to learn how AI systems make decisions, like denying a loan, but the tech sector says that could be difficult. Credit: Bloomberg

Independents David Pocock, Kate Chaney and Zali Steggall criticised Labor for rejecting a suggestion to force political parties – which often send mass texts to voters – to comply with privacy laws. However, its agreement to remove an exemption for small businesses from the Privacy Act received backing from small business ombudsman and former Coalition minister Bruce Billson, after lobby groups had warned of the potential cost.

The government’s response committed to empowering Australians as they confront a society in which judgments are increasingly made by automated tools rather than by humans.

It agreed to the recommendation that individuals should have a right to request meaningful information about how automated decisions are made after feedback to the review raised concerns about the transparency and integrity of such decisions.

“The information provided to individuals should be jargon-free and comprehensible,” it said, while also acknowledging huge productivity gains in healthcare, the environment, defence and national security as a result of the technology.

But Tech Council of Australia head Kate Pounder said providing transparency about scenarios in which AI was used – for example when ambulance drivers used Google Maps to find the quickest route – was not always in the public interest.

Kate Pounder, chief executive at the Tech Council of Australia. Credit: Peter Rae

“There’s also emerging evidence that using the term AI can make people anxious, and using a different term could make people more comfortable,” she said, while stating her overall support for the government’s privacy stance.


“If our goal is to make better decisions and give good advice and care, is transparency always the answer?”


UNSW AI Institute chief scientist Toby Walsh said some firms used simple automated processes that made clear which steps and variables – such as age, income and gender – were factored into decisions.

Other companies, he said, used more complex systems whose decision-making was more difficult, if not impossible, to explain.
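To make Walsh’s distinction concrete, here is a minimal, hypothetical sketch (in Python; the rules, thresholds and names are invented for illustration, not drawn from any real lender) of a simple automated process whose reasoning can be reported back to an applicant:

```python
# Illustrative sketch only – not from the article. A hypothetical
# rule-based loan assessment in which every variable and threshold
# is explicit, so a refusal can be explained to the applicant.

def assess_loan(age: int, annual_income: float, existing_debt: float):
    """Return (approved, reasons); every rule is inspectable."""
    reasons = []
    if age < 18:
        reasons.append("applicant is under 18")
    if annual_income < 30_000:
        reasons.append("income is below the $30,000 threshold")
    if existing_debt > annual_income / 2:
        reasons.append("debt exceeds half of annual income")
    return len(reasons) == 0, reasons

approved, reasons = assess_loan(age=42, annual_income=25_000, existing_debt=20_000)
print(approved, reasons)
# False ['income is below the $30,000 threshold', 'debt exceeds half of annual income']
```

A complex machine-learning model trained on the same inputs offers no comparable list of reasons: its behaviour is spread across millions of learned parameters rather than legible rules, which is what makes such systems hard to explain.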

The government also adopted a recommendation to force privacy policies to set out the personal information to be used in “substantially automated decisions”, though Pounder said the types of decisions that fell under this definition were unclear.

Another key recommendation of the privacy inquiry, to which the government has agreed in principle, is the so-called right to be erased or forgotten, which would allow individuals to require outfits such as Google to remove certain items about them from search results.

Walsh said researchers were probing how to force large language models such as ChatGPT, which hoover up online information, to forget personal information.

“It’s very problematic,” he said, raising questions about the applicability of privacy rights in the AI era. “We don’t have good answers yet.”

ChatGPT, which spawned global debate about AI’s profound potential as it rose in popularity, was briefly banned in Italy this year over a suspected privacy breach. Its maker, OpenAI, was forced to commit to giving citizens a way to object to their data being used to train the model.

CSIRO cybersecurity researcher Thierry Rakotoarivelo, who co-authored a paper on machine unlearning, said applying the right to be forgotten to systems like ChatGPT was much harder than it was for a search engine.

“If a citizen requests that their personal data be removed from a search engine, relevant web pages can be delisted and removed from search results,” Rakotoarivelo said on September 11.

“For [large language models], it’s more complex, as they don’t have the ability to store specific personal data or documents, and they can’t retrieve or forget specific pieces of information on command.”
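As a loose illustration of that contrast – a toy sketch with invented names, not a description of how any real search engine or language model works:

```python
# Toy sketch only – invented names, not a real system. For a search
# engine, "forgetting" a person can be a simple delete from an index.
search_index = {
    "jane citizen": ["https://example.com/a", "https://example.com/b"],
    "unrelated topic": ["https://example.com/c"],
}
search_index.pop("jane citizen", None)  # pages no longer surface in results

# A large language model has no per-person entry to delete: what it
# "knows" is diffused across numeric parameters, so removing one
# person's influence requires retraining or approximate unlearning.
model_parameters = [0.12, -0.87, 1.05]  # nothing here maps to "jane citizen"
```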

Labor also rejected a recommendation to force media companies to comply with privacy laws, a move the media sector claimed would harm press freedom.

Independent Zali Steggall has criticised Labor for rejecting a suggestion to force political parties to comply with privacy laws. Credit: Alex Ellinghausen

Steggall earlier this year questioned media outlets’ publication of former Liberal staffer Brittany Higgins’ leaked texts, arguing it was an invasion of her privacy.

“The media is vital for strong democracy, and freedom of speech is very important, but it comes with responsibility and there’s a fine balance between that right and the right to privacy of individuals,” she said.

