Government departments and agencies are divided on whether to let staff use AI tool ChatGPT

Every federal government department and agency was asked if it allowed ChatGPT. The response shows a divided bureaucracy.

ChatGPT is a language model-based chatbot developed by OpenAI. Picture: AFP

An “alarming” split has been revealed within the federal public service over whether to use artificial intelligence tool ChatGPT, with Home Affairs, the Reserve Bank of Australia and Attorney-General’s Department among many that allow the chatbot.

The Coalition has seized on the Albanese government’s “inconsistent approach” on ChatGPT after opposition finance spokeswoman Jane Hume asked every department and agency if they permitted its use on their internal networks and what risk management they undertook.

Among the more prominent departments and agencies that have responded to Senator Hume, Social Services, the National Anti-Corruption Commission, Fair Work Commission and Services Australia do not allow the AI tool.

But those that do include the RBA, AG’s Department, Home Affairs, Administrative Appeals Tribunal, the Department of Parliamentary Services and the Department of Climate Change, Energy, the Environment and Water.

“The RBA allows employees to use ChatGPT, with less sensitive data only, through Microsoft Bing Chat for Enterprise,” the RBA says.

Home Affairs said staff could access ChatGPT “subject to approval”, and must ensure they don’t: “Access or allow access of any official information without the appropriate authority, a valid business reason, and a need to know; access, discuss or share official information via an unapproved messaging or collaboration application; and disclose or allow the disclosure of any official information without the appropriate authority.”

But Fair Work Commission members and staff have been told they can’t use generative AI tools for work purposes.

“On entering a URL for all known and listed generative artificial intelligence tools, users are provided with a warning. The Commission actively monitors our network on an ongoing basis for use of generative AI tools,” the FWC states.

The divide shows how differently departments and agencies – with many still developing AI policies – have interpreted interim guidance released in November by the Digital Transformation Agency on public servants’ use of generative AI.

That advice states government employees should assume any information they input to tools such as ChatGPT could become public.

The DTA also advises not to input anything that could reveal classified, personal or otherwise sensitive information, and that employees should be able to explain, justify and take ownership of their advice and decisions.

Opposition home affairs and cyber security spokesman James Paterson said there were significant potential productivity benefits from generative AI tools such as ChatGPT but also major risks.

“The inconsistent approach adopted by the Albanese government is alarming, especially considering the privacy and cyber security threats these tools can pose,” he said.

“They urgently need to put in place a whole-of-government response before it leads to more compromises of the sensitive, private and personal information of Australians held by the government.”

Public Service Minister Katy Gallagher did not respond to questions.

Government sources pointed to the DTA’s whole-of-government guidance, noting Labor had established the AI in Government Taskforce.

The government is also considering more than 500 submissions received as part of consultations into a discussion paper on the safe and responsible use of AI, with a response due imminently.

David Batch, Privacy Practice Lead at cyber security firm CyberCX, said the DTA’s interim advice was a “good guardrail” but there needed to be specific standards developed for each agency.

“Beyond that, particularly as it pertains to information security and privacy risks, there should be security and privacy impact assessments done of any implementation of AI in a government department. That is a current requirement under the federal government privacy code if it is accessing personal information,” he said.

Departments that haven’t responded to Senator Hume include Defence, Finance, Prime Minister and Cabinet, and Health and Aged Care.

Rosie Lewis, Canberra reporter

Rosie Lewis is The Australian's Political Correspondent. She began her career at the paper in Sydney in 2011 as a video journalist and has been in the federal parliamentary press gallery since 2014. Lewis made her mark in Canberra after breaking story after story about the political rollercoaster unleashed by the Senate crossbench of the 44th parliament. More recently, her national reporting includes exclusives on the dual citizenship fiasco, women in parliament and the COVID-19 pandemic. Lewis has covered policy in-depth across social services, health, Indigenous affairs, agriculture, communications, education, foreign affairs and workplace relations.

Original URL: https://www.theaustralian.com.au/nation/politics/government-departments-and-agencies-are-divided-on-whether-to-let-staff-use-ai-tool-chatgpt/news-story/a9a206035dd61cd9175650b75c7eaeae