NewsBite

Telcos, human rights organisation, tech giants at odds over need for new AI laws

The government says submissions to its AI inquiry, ranging from Telstra to the Human Rights Commission, have exposed a division as to whether specific AI laws are necessary.

Federal Industry and Science Minister Ed Husic. Picture: Gary Ramage

A federal government report has exposed divisions within business, the tech sector and representative bodies over whether artificial intelligence laws are needed.

The government on Wednesday published the vast majority of submissions to its consultation on safe and responsible AI regulation and they show respondents such as Telstra, the Australian Human Rights Commission and tech giant Meta are at odds.

The government received more than 500 submissions to its AI inquiry, and they reveal a split over the best path towards effective regulation.

Many submissions pointed to the EU’s proposed Artificial Intelligence Act as an example Australia should look to for guidance.

The EU’s AI Act, which would be the world’s first comprehensive AI law, would ensure AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly, and would be overseen by people, rather than by automation.

In a speech on Wednesday, Industry and Science Minister Ed Husic said that a lack of certainty around AI standards was preventing Australian businesses from deploying the technology.

“Most tech industry submissions concluded that updating existing laws would be more effective than introducing new laws specifically for AI developers and users,” Mr Husic told the Mindfields Automation Summit in Sydney.

“Other submissions noted that many of these existing laws have gaps.

“Consumer and human rights groups, and members of the public, tended to support new laws for AI developers and users.

“It’s not hard to imagine that governments will increasingly question the thinking of firms at the design phase of software and hardware development.

“The speed and scale of AI diffusion, particularly with the rise of generative AI, was raised by many responding as posing a distinct new set of challenges.”

The government in August extended its consultation period after receiving more submissions than it expected. Mr Husic said his department was working through the submissions regarding the best form of a regulatory framework for Australia.

“We will be continuing to explore forms new AI safeguards and regulation could take. We will be taking lessons from past technology rollouts and where we as regulators could have done better,” he said.

“From a risk-based framework, as originally proposed, versions of which are currently being pursued in Canada and the EU, to alternatives like a principles-based regulatory framework.

“This government is committed to boosting adoption of AI and automation technologies in Australia.”

Mr Husic pointed to a recent University of Queensland and KPMG Australia study that found only 40 per cent of Australians trust the use of AI at work, while 35 per cent think there are enough safeguards for AI.

“I know that we can design regulation providing a platform for innovation while protecting Australians – our communities and our national wellbeing,” he said.

“The root of this debate isn’t, should we regulate AI? It is, in what circumstances should we expect people and organisations developing and using AI to have appropriate safeguards in place?”

OpenAI co-founder and chief executive Sam Altman in Melbourne. Picture: Arsineh Houspian.

Some experts have warned of automation-spurred job losses amid the rise of generative AI tools like ChatGPT, while others are concerned about a rise in socio-economic inequality and algorithmic bias caused by bad data.

In their submissions, the Australian Human Rights Commission and the Council of Small Business Organisations in Australia called on the government to introduce specific new legislation to address the risks of artificial intelligence.

The Peter Thiel-founded AI firm Palantir said accountability measures needed to be built into the entire process of AI development – starting as early as the collection of training data by model developers – to help prevent potentially flawed, erroneous, biased, or otherwise unrepresentative or unsuitable data from becoming embedded into AI systems.

Meta said the government should “ensure AI regulation is principle based and adopts a pro-innovation, risk-based approach, focused on the uses of the technology and not the technology specifically”.

The Australian Federal Police said investments in AI would propel its organisational capabilities, including opportunities to create operational efficiencies, improve situational awareness to inform better human decision-making, and minimise risks to public safety and AFP members.

Meanwhile, Australia’s risk of a recession could rise if it doesn’t take full advantage of the opportunities presented by artificial intelligence, according to the Technology Council of Australia, which represents companies such as Atlassian, Canva, Blackbird and Telstra.

It comes after OpenAI chief executive Sam Altman met with Mr Husic in June in Parliament House. Mr Altman co-founded OpenAI, the company behind the breakthrough AI chatbot ChatGPT.

According to the latest available data, ChatGPT has more than 100 million users, and its website generated 1.6 billion visits in June 2023.

Mr Altman was one of the signatories to a recent statement that warned “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Additional reporting: Noah Yim



Original URL: https://www.theaustralian.com.au/business/technology/telcos-human-rights-organisation-tech-giants-at-odds-over-need-for-new-ai-laws/news-story/9eece69a54a871555fb841136447de3b