
Government may force companies to label AI content to prevent deep fakes

By Matthew Knott

Companies may be forced to clearly label content generated by artificial intelligence to stop the public from being fooled by deep fake images and videos, as well as lifelike robotic recordings.

Industry Minister Ed Husic, who is overseeing the federal government’s efforts to regulate artificial intelligence (AI), met on Friday with Sam Altman, the chief executive of ChatGPT’s parent company, OpenAI, at Parliament House.

Industry Minister Ed Husic is considering ways to regulate the rapidly advancing field of artificial intelligence. Credit: Dion Georgopoulos

The pair discussed ways to minimise the potential negative effects of the rapidly advancing technology.

Husic said Altman expressed an openness to the idea of mandatory labelling of AI-generated material, an idea included in a bill passed by the European Parliament this week and awaiting final approval.

“The use of generative AI to create images that are lifelike but don’t reflect reality, the use of chatbots levered off generative AI where people may not be aware of the fact they’re talking to tech rather than a human – these are serious issues,” Husic said in an interview.


“I’ve raised, in light of what the EU has proposed this week with its draft laws, the labelling of AI-generated product so people have confidence about what they’re dealing with.

“OpenAI have indicated a willingness to consider that and it’s something we’re thinking about as well.”

Husic said he was pleased that Altman agreed to give Australian scientists and researchers access to OpenAI’s models, including future versions of the ChatGPT chatbot, so they could better understand how the technology worked.


“This is what the US and UK have asked for and I want to make sure we keep pace with that,” he said.

However, Husic expressed doubt about Altman’s suggestion of a global AI regulator, saying it was important for countries such as Australia to retain sovereignty over tech regulation.

“They talked about setting up an overarching international agency, and while clearly we are keen to work with other countries I haven’t necessarily signed on to that at this stage,” he said.

“We need to be able to deal with the here and now and there are clearly concerns about whether the technology is getting ahead of itself.

“I’m not an evangelist and I’m not a catastrophiser when it comes to AI.


“I recognise its future benefits, but clearly the community wants us to think about risk and that’s what we’re going to do.”

Husic said he could not envisage banning ChatGPT in Australia unless particular functions of the technology were deemed to be “high risk”.

The government earlier this month released a discussion paper on options to regulate AI and launched an eight-week consultation process.

Altman, who is visiting Australia for speaking engagements, last month told the US Congress: “If this technology goes wrong, it can go quite wrong.”

“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” Altman said.

He added that regulatory intervention by governments could help mitigate the dangers posed by the technology.



Original URL: https://www.smh.com.au/link/follow-20170101-p5dh8r