
A sit-down with King ChatGPT, Sam Altman

AI guru believes Australia is on ‘a good path’ on domestic regulation as government weighs next steps.

Co-founder and CEO of OpenAI, Sam Altman, in Melbourne. Picture: Arsineh Houspian/The Australian

The chief executive of US-based ChatGPT maker OpenAI, Sam Altman, says he is “more optimistic” about global governance of artificial intelligence and believes Australia is on a “good path” on domestic AI regulation, although he warned against an overly rigid regulatory framework.

“Australia’s a super important player in the region and in the world”, he told The Weekend Australian, adding: “I would be surprised if Australia does not build great AI companies; it seems like there’s great interest in building AI technology here.”

Mr Altman was in Melbourne on Friday for the last stop on OpenAI’s world tour, which has taken him to cities including Washington, London, Paris, Brussels, New Delhi and Seoul.

He sat down with The Weekend Australian on the sidelines of an event hosted by the Victoria-based Startup Network.

OpenAI chief executive officer Sam Altman speaks at Keio University last Monday. Picture: Tomohiro Ohsumi/Getty Images

Mr Altman’s visit came just weeks after the federal government announced an eight-week consultation on AI regulation.

One paper released as part of that consultation asks for feedback on a “draft possible risk-based approach”, inspired by moves in the EU and Canada, that would determine whether human assessment of AI systems was necessary before implementation.

Mr Altman struck a cautious tone about EU-style tiered risk assessments and urged a more dynamic “agency-based approach”.


“I think it’s going to be very hard to write down ‘here is every potential harm, the level that it’s at, and what you have to do to mitigate it’,” he said.

“Maybe we need new agencies in different countries, given the speed with which this is moving, so that we can write the principles, but then have the exact limit and tests evolve over time.

“When people first started thinking about these systems, they said, a friendly AI chatbot, no problem, that’s very low risk.


“I think we’re seeing now that could be one of the highest-risk categories, if you think about the ability of persuasion of the latest systems to be used to influence someone, and I think in most people’s risk-based tiers, that was low.”

Reflecting on his month-long whirlwind tour, he said he was “way more optimistic” about global governance of AI than before he had set out.

“I really came into this with no idea about what the level of receptivity was going to be to that – whether people took (artificial general intelligence) seriously at all and whether people thought it was something that warranted a global effort as a response,” he said.

AGI refers to the hypothetical development of artificial intelligence that is intellectually on par with humans.

“I would have guessed, pretty not,” he said. However, he said his conversations around the world had shown people were receptive to the idea, as was Industry and Science Minister Ed Husic, with whom Mr Altman met on Friday morning.

Mr Husic told The Weekend Australian he had a “constructive and positive” 40-minute conversation with Mr Altman.

“He agreed that they would make a submission to the consultation process,” he said.

Industry and Science Minister Ed Husic. Picture: NCA NewsWire/Martin Ollman

“I also put to him, I was very keen for our researchers and scientists to be able to have access to their models, particularly ahead of the release of new generations of ChatGPT (and) he has given me a commitment that that will be made possible.

“I think understanding the way these models operate is really important … trying to think ahead about potential risks or applications … in ways we don’t necessarily support is really important.”

Mr Altman said “countries really wanted to co-operate, really wanted to think about what a framework might be”.

Mr Altman was one of the signatories to a recent statement that warned “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

He also suggested that the advent of generative AI may require a new economic compact on copyright and intellectual property.

He said tracing which training data was used to create a given output remained an “open research question”.

“I can’t tell you we know exactly how to do it, but we think it should be roughly possible.”

Noah Yim, Reporter

Noah Yim is a reporter at the Sydney bureau of The Australian.

