NewsBite

How Okta, Deloitte allow staff to use ChatGPT

Some companies are rolling out guidelines for what work staff can safely do with generative AI platforms, while others are not.


Australian companies are rolling out stringent guidelines for staff on the use of generative AI products, including ChatGPT and Google’s Bard, over fears the technologies could lead to data theft or major breaches.

The guidelines arrive as some companies have banned the technologies outright, while others have designated teams with access to enterprise accounts to develop internal reports on large language models (LLMs).

Some experts say it is only a matter of time before a mass data breach occurs on the back of employees unintentionally volunteering sensitive data to LLMs in a bid to improve their work.

At identity and access management company Okta, there are two distinct policies for staff on the use of AI.

While ANZ managing director Phil Goldie wouldn’t share the exact details, The Australian understands employees are barred from sharing customer and confidential data with public generative AI tools such as ChatGPT and Bard.

Meanwhile, AI products developed internally are limited to being used and trained on materials that fall in line with company values.

The move toward building internal generative AI platforms to safeguard company IP is becoming increasingly common, with many companies over the past few weeks announcing the launch of in-house products.

Mr Goldie said Okta’s view was that the use of large language models dramatically increased companies’ security risk.

“A key security concern when using LLMs is the potential for data breaches, unauthorised access to confidential information, and biased and inaccurate information,” he said.

Mr Goldie said companies should move quickly to train staff on the safe use of the technologies. “With LLMs, acceptable use policies should be developed that clearly state which AI technologies are approved for use, what data types can be fed into these models, and who can provide guidance to employees. Okta has developed a global policy on the use of generative AI that provides our workforce with clear guardrails on the use of this technology.

“The obligations of staff using these models varies based on the type and sensitivity of the data submitted to the model, whether the use case is internal for productivity or external which impacts our customers, and whether they use public GenAI models available ‘as-a-service’ or open source models that can be owned and managed inside a company.”

At Deloitte, guidelines have been developed around what the firm considers the ethical use of generative AI and what information staff can and cannot use.

“We encourage our people to be curious and learn more about the potential applications of generative AI in a safe and ethical manner,” a Deloitte spokeswoman said. “We have introduced clear guidelines for the use of generative AI covering topics like ethical use, confidentiality, transparency and validation of outputs.”

At software and network systems company Ciena, which has around 7000 global staff, employees have been warned against sharing company data with third-party AI platforms unless they have been vetted by the company.

Chief information officer Craig Williams said it was important companies recognised the risk to their customers and clients when using generative AI.

“We have reminded our employees that these tools must be used in a responsible manner that protects our information and intellectual property, as well as that of our employees, customers, and business partners,” he said.

CipherStash CEO Dan Draper said he believed it was only a matter of time before the world sees the first major data breach originating from ChatGPT.

He said there were numerous applications built on OpenAI’s API which were indirectly feeding ChatGPT data.

“How data is shared with an AI is often not obvious. There are a lot of services that use OpenAI’s APIs which will potentially feed ChatGPT that information as well,” he said. “It’s the wild west right now. Companies who are using AI in this way should be made to declare how they’re using this data. You can buy a box of Cornflakes and know how much sugar is in the box. Technology needs to get to a similar point with transparency.”

As for CipherStash’s own staff, only open-source code is allowed to be used with generative AI products, and no customer, employee or financial data can be shared.

Originally published as How Okta, Deloitte allow staff to use ChatGPT


Original URL: https://www.couriermail.com.au/business/how-okta-deloitte-allow-staff-to-use-chatgpt/news-story/644945dd2122148a551cc6504eb84339