Anthony Albanese moves to safeguard Aussies against AI threats

Anthony Albanese is preparing sweeping regulations on artificial intelligence amid rising global concerns that AI could lead to human ‘extinction’.

Search giant Baidu's lacklustre unveiling of its chatbot exposed gaps in China's race to rival ChatGPT, as censorship and a US squeeze on chip imports have hamstrung the country's artificial intelligence ambitions.

Anthony Albanese is preparing sweeping regulations on artificial intelligence amid rising global concerns from tech companies and foreign governments that the AI technology explosion could lead to human “extinction” and damaging disinformation.

Industry Minister Ed Husic on Thursday will release two reports laying the ground to fast-track laws and regulations strengthening domestic rules governing the safe and responsible use of AI.

The government response comes as the US and European Union grapple with how to regulate AI tech advancements and mitigate perverse outcomes across defence, education, legal, political and social institutions.

In a statement signed by almost 400 leading tech chiefs and academics, including OpenAI chief executive Sam Altman, whose company operates the controversial ChatGPT, the US-based Center for AI Safety said “mitigating the risk of extinction from AI” should be a global priority.

Tesla founder Elon Musk, Apple co-founder Steve Wozniak and other tech leaders have also called for a “pause” on the development of powerful AI systems.

A rapid response information report on generative AI, prepared by the National Science and Technology Council for the Prime Minister and senior cabinet ministers, outlines serious “risks” Australia must confront.

The report, led by Chief Scientist Cathy Foley, raises concerns about domestic “capabilities, capacities, investments and regulatory frames” in managing the rise of AI. It addresses the impact of AI on automation and jobs, trust in democratic systems, private and public organisations, authentication of information and breaches of privacy.

Mr Husic said the government’s Safe and Responsible AI in Australia discussion paper kickstarts an eight-week consultation process to strengthen regulatory frameworks, identify gaps and ensure governance mechanisms are fit for purpose. The government is assessing existing AI safeguards covering consumer, corporate, eSafety, criminal, administrative, copyright, intellectual property, health and privacy laws.

“Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment. The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” Mr Husic said.

“But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”

Industry Minister Ed Husic says there “needs to be appropriate safeguards to ensure the safe and responsible use of AI”. Picture: Martin Ollman/NCA NewsWire

With Australia one of the first countries to adopt artificial intelligence ethics principles, Mr Husic said “building trust and public confidence in these critical technologies” underpins the government’s regulatory approach.

Australia’s AI strategy currently focuses on a broad set of general regulations, sector-specific regulation (airline, food and motor vehicle safety, therapeutic goods and financial services) and voluntary or self-regulation initiatives.

While backing the benefits of productivity-enhancing AI technologies, the 41-page discussion paper warns of harmful uses, including generating deepfakes to influence democratic processes, creating misinformation, encouraging people to self-harm and enabling racial discrimination.

“Inaccuracies from AI models can also create many problems. These include unwanted bias and misleading or entirely erroneous outputs such as ‘hallucinations’ from generative AI,” it said.

Schools in NSW, Queensland, Western Australia and Tasmania have already banned the use of ChatGPT, a chatbot tool launched last year that generates human-like text and allows people to write essays, code, emails and reports.

OpenAI chief executive Sam Altman – whose company operates ChatGPT – gives testimony before the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law last month. Picture: Win McNamee/Getty Images/AFP

The NSTC report said “the current ‘ChatGPT moment’ is provoking public conversation about the role AI should have in Australian society”.

“The rapid expansion of ChatGPT shows the potential of generative AI technologies is difficult to predict over the next two years, let alone 10. The concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potential risks to Australia,” the council report said.

Australian National University vice-chancellor Brian Schmidt, an NSTC member, said AI is “going to be extraordinarily hard to regulate” but Australia “has a chance to get out in front”.

“There could be a fair bit of negative stuff that happens because of it. I think anyone who thinks they’re going to easily regulate what’s going to happen over the next three or four years is wildly optimistic,” Professor Schmidt told The Australian.

“I think the worst thing about it for me in the short term is that it is going to make it very difficult for the average person to know what is real.

“We’re going to be, pretty soon, in a position where nothing without some sort of certification is going to be believable.”

UWA emeritus professor and NSTC member Cheryl Praeger said the situation was “fast-changing and uncertain”, and raised concern about the “ability for bad actors to create misinformation” with AI.

“I’m a former academic and so I’ve been … paying a lot of attention to what’s been happening in the education sphere,” Professor Praeger said.

Griffith University associate professor and NSTC member Jeremy Brownlie said “sensible regulation is always a good thing” but rejected calls to ban AI outright.


Original URL: https://www.theaustralian.com.au/nation/politics/anthony-albanese-moves-to-safeguard-aussies-against-ai-threats/news-story/737417ef804ab4e2328599ed135feb94