Work crackdown on ChatGPT, Bard begins over fears employees may share sensitive data

Cyber security experts say employees using ChatGPT shouldn't get too comfortable as companies are expected to ban the technology at work over fears of mass data breaches.

Cyber security experts say companies are banning the use of LLMs in the workplace. Picture: AFP

Cyber security experts say companies are moving to stop employees from sharing sensitive company information with large language models (LLMs) such as Bard and ChatGPT in order to head off mass data breaches.

The move arrives just one week after Google released its LLM to Australian users and it was found to have an “opinion” on political leaders and other contentious issues.

Some cyber experts say a number of their clients are already banning the use of LLMs in the workplace, and that companies are reminding employees about company data policies.

Concerns have only partly been allayed over employees feeding sensitive workplace information into their personal accounts because, unlike ChatGPT, Bard does not yet offer an enterprise service, which limits interactions with the service to personal Gmail accounts.

Aiden Heke, the chief executive of data and analytics consultancy Decision Inc. Australia, said that until more was known about the algorithms behind prominent LLMs, companies needed to be cautious.

“It opens up all sorts of data risks inside an organisation. Those data risks are pretty significant because it starts to enable external data access and use inside the enterprise in a relatively unconstrained way,” he said.

Decision Inc. Australia chief executive Aiden Heke.

“There are guardrail conversations going on in the industry at the moment and I think everyone’s quite scared as we don’t understand these algorithms.”

Mr Heke said companies not already considering the potential risk from LLMs would soon realise after the first few major incidents took place.

“It’s only going to take one or two significant data breaches with these core technologies to get boards really concerned about what’s happening on the ground,” he said.

ManageEngine research director Ramprakash Ramamoorthy said one of the biggest concerns was that Bard might be providing Google employees and the company’s contractors access to information shared with it.

Google did not deny this claim, but said it uses automated systems to try to stop Bard from training on personal data.

“We take your privacy seriously. To help Bard improve while protecting your privacy, our systems use automated techniques to remove personally identifiable information,” a spokeswoman said.

OpenAI’s ChatGPT is training itself on the information input from personal accounts. Picture: AFP

“People want to know that they are safe and secure online, which is why for years we’ve focused on building products and infrastructure that respect and protect people’s privacy – from individuals to enterprises.”

Mr Heke said it was important that employees understood that if they used personal accounts with LLMs for work purposes, Google would compile their data.

“Google treats you as one single entity so if you use your personal account for work queries and don’t adjust your security profile within Google, they’ll see everything,” he said.

Mr Ramamoorthy said questions about whether companies should ban staff from using LLMs at work were increasingly common. “Ignoring LLMs would be tantamount to ignoring Google 20 years ago,” he said.

“Though OpenAI doesn’t train itself on the data input from enterprise accounts, it does with personal subscriptions,” he said.

“This means any data input from employees’ personal accounts could be accessed, in both good ways and bad, by future users of the tool.

“Google Bard has no enterprise plan. This means any data run through the tool is accessed by Google’s employees, and potentially contractors, and it’s being used to refine the model.”

Most of ManageEngine’s clients had already banned the use of LLMs; however, Mr Ramamoorthy said a better approach would be for companies to prioritise enterprise accounts, where safety guardrails are a default of the service.

A recent survey from Decision Inc. found about 40 per cent of staff were testing the use of ChatGPT at work.

Mr Heke said many employees were hesitant to admit they were using LLMs, but he anticipated that in some organisations the figure would exceed 50 per cent of staff.

He said poor management of LLMs in the workplace could result in enterprises missing out on the productivity benefits the technology could deliver.

“I think there’s going to be this dawning sort of realisation where organisations are going to be concerned about the benefits that are arising from these efficiencies that might not be seen inside the enterprises themselves,” he said.

“The risk is that you’re going to end up with staff who are potentially not as busy as they could have been, and that the efficiency from LLMs doesn’t necessarily turn up.”

Joseph Lam, Reporter

Joseph Lam is a technology and property reporter at The Australian. He joined the national daily in 2019 after he cut his teeth as a freelancer across publications in Australia, Hong Kong and Thailand.

Original URL: https://www.theaustralian.com.au/business/technology/work-crackdown-on-chatgpt-bard-begins-over-fears-employees-may-share-sensitive-data/news-story/96bdeb5927c62d5203716d06ddcd67c7