Businesses given guidance but not rules on AI
Labor will fall short of forcing businesses to declare when they use artificial intelligence, instead issuing “guidance” encouraging the use of visible labels on images and videos generated by the technology.
The guidance is part of the National AI Plan to be released on Tuesday, as the Albanese government moves to deliver on productivity-enhancing initiatives recommended at the economic reform roundtable.
The guide for businesses, content creators and AI developers recommends clearly labelling content when AI is used, along with its source. It also recommends embedding visible or invisible information about the origin of AI content through watermarking, and recording more detailed provenance information in metadata.
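The guidance does not prescribe a technical standard, but the combination it describes – a visible label plus machine-readable provenance – is simple to sketch. The following Python example, using the Pillow imaging library, stamps a visible "AI-generated" caption on an image and records provenance as PNG text metadata; the field names (ai_generated, generator, created_by) are illustrative assumptions, not part of any standard named in the guidance.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Create (or load) an image; here a blank canvas stands in for model output.
img = Image.new("RGB", (640, 360), "white")

# Visible label: the guidance recommends clear labelling on the content itself.
draw = ImageDraw.Draw(img)
draw.text((10, 10), "AI-generated image", fill="black")

# Machine-readable provenance: record descriptive fields as PNG text metadata.
# The key names here are hypothetical, not drawn from a published standard.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
meta.add_text("created_by", "Example Pty Ltd")

img.save("labelled.png", pnginfo=meta)
```

Invisible watermarking, by contrast, alters the pixels themselves so the mark can survive cropping and re-encoding; that is considerably harder and is typically done with vendor tooling rather than a few lines of code.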
The voluntary nature of the labelling runs counter to what was endorsed by the ACTU’s national congress last year, when unions called for all AI-produced content to be watermarked or labelled “in a format that is clear, accessible, and permanently embedded”.
Industry Minister Tim Ayres said businesses should be transparent about when AI was used.
“AI is here to stay. By being transparent about when and how it is used, we can ensure the community benefits from innovation without sacrificing trust,” Senator Ayres said.
“That’s why the Albanese government is urging businesses to adopt this guidance. It’s about building trust, protecting integrity and giving Australians confidence in the content they consume.”
When the Albanese government previously floated the idea of requiring disclosure of the presence of AI or AI-generated content in high-risk settings, the ACTU said it was concerned this would be too “narrow”.
The media industry union – the Media Entertainment and Arts Alliance – was even more demanding. “MEAA is strongly concerned about the narrow application of the requirement for labelling to high-risk AI only,” the union said in 2023.
“We believe labelling should apply to all content generated by AI, including those applications not designated ‘high-risk’. This is necessary to ensure that, as a general rule, audiences can be informed about whether content – a film, television show, media article or musical composition – has been generated by AI.” Earlier this year, the MEAA said safeguards “must be introduced”.
The long-awaited National AI Plan will include three pillars: ensuring Australia has the policy settings to encourage investment; ensuring impacted workers are supported and reskilled; and promoting responsible use of the technology.
The Australian reported last week the government was likely to reject the need for a stand-alone AI act and would take a light-touch approach to regulating the technology, aiming to become a leading destination for AI investment in the Asia Pacific.
In a departure from the approach recommended by former industry minister Ed Husic, regulations on AI would be implemented on a case-by-case basis, in line with advice from a new AI safety institute that sits within the Department of Industry, Science and Resources.
While unions had previously been strident in their demands for an AI act, ACTU assistant secretary Joseph Mitchell last week suggested the new AI safety institute made a stand-alone legal framework unnecessary.
Curtin University AI expert Alex Jenkins said Australia needed a “light-touch” framework that would allow consumers to appeal adverse decisions involving AI. “There simply needs to be a mechanism for consumers, if they feel they have been impacted by the use of AI, that they know AI has been part of some decision-making process involved in their life and they have some form of remediation,” he said.
“I think trying to ban use cases upfront is the wrong approach. I think blanket legislation is the wrong approach. The use of AI in the automotive sector is going to look very different to the use of AI in the healthcare sector, which is going to look different to AI in the financial sector.”
There have been attempts to regulate AI labelling or watermarking. The EU’s AI act requires AI providers to “ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated”.
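Read back, metadata of that kind is what makes content “detectable as artificially generated” in a machine-readable sense. A minimal check, continuing the earlier Pillow sketch (the ai_generated key remains a hypothetical convention):

```python
from PIL import Image

# Inspect the PNG text chunks written at generation time.
img = Image.open("labelled.png")
metadata = getattr(img, "text", {})  # empty if the chunks were stripped

if metadata.get("ai_generated") == "true":
    print("AI-generated, by:", metadata.get("generator", "unknown"))
else:
    print("No machine-readable AI marker found.")
```

The obvious weakness is that such metadata is discarded by screenshots and most re-encoding, which is one reason regulators and industry have also pursued watermarks embedded in the content itself.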
There has also been an industry push for labelling AI content. The Coalition for Content Provenance and Authenticity – led by the maker of Photoshop, Adobe – counts among its members some of the world’s leading AI developers, including Amazon, Google, Meta, Microsoft, and OpenAI.
Watermarking AI-generated text – such as a sentence produced by ChatGPT – is a technically difficult problem that AI developers say they have not solved. OpenAI says it has developed a method but that it is easy to circumvent.
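OpenAI has not published its method, but the best-known academic approach – the “green list” scheme of Kirchenbauer et al. – shows why text watermarking is statistical rather than embedded, and therefore fragile. The toy Python sketch below (a simplified illustration, not any vendor’s implementation) hashes each previous token to pick a “green” half of the vocabulary, biases generation towards it, and detects the watermark by counting how often tokens land in the green half:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary standing in for a real tokenizer

def green_list(prev_token: str, fraction: float = 0.5) -> list:
    """Deterministically select a 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return shuffled[: int(len(shuffled) * fraction)]

def generate(length: int, bias: float = 0.9) -> list:
    """Toy 'model': samples uniformly, but prefers green-listed tokens with probability `bias`."""
    out, rng = ["tok0"], random.Random(42)
    for _ in range(length):
        pool = green_list(out[-1]) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out

def green_fraction(tokens: list) -> float:
    """Detector: unwatermarked text scores about 0.5; watermarked text scores well above it."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)

print(f"green fraction: {green_fraction(generate(200)):.2f}")  # about 0.95 with the bias above
```

The fragility is visible in the design: paraphrasing or lightly editing the output reshuffles which tokens follow which, washing the green-token excess back towards 0.5 – consistent with OpenAI’s assessment that its method is easy to circumvent.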