NewsBite

Legal danger to directors of taking their eye off AI

Directors face commercial, reputational and regulatory damage if they don’t have enough oversight of their company’s handling of artificial intelligence, says the AICD.

AICD managing director and chief executive Mark Rigotti. Picture: Aaron Francis

Directors could face “significant commercial, reputational and regulatory damage” if they do not have enough oversight of their company’s handling of artificial intelligence, the Australian Institute of Company Directors says.

“Current research suggests that there is generally limited board oversight of artificial intelligence,” AICD chief executive Mark Rigotti says in a new guidebook for company directors.

He said the application of artificial intelligence in corporate Australia was “often subject to inadequate controls and risk oversight”.

“In many cases, directors and senior executives are unaware of where in the organisation’s value chain artificial intelligence is being used.

“If left unaddressed, this risks significant lost opportunities and commercial, reputational and regulatory damage, with regulators increasingly focused on regulating artificial intelligence harms.”

The AICD handbook tells directors that they need to understand how their company is using artificial intelligence, suggesting that they create an inventory or a register of AI use within their company, including where it is incorporated in existing cyber and tech products.
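By way of illustration only, and not drawn from the AICD guidebook itself, the sketch below shows one possible shape such a register entry could take. The field names (system name, business owner, data sources, risk rating and so on) are assumptions for the purpose of the example.

from dataclasses import dataclass, field

# Illustrative sketch of a single entry in a hypothetical AI-use register.
@dataclass
class AIUseRegisterEntry:
    system_name: str                  # e.g. a chatbot, scoring model or vendor tool
    business_owner: str               # accountable executive or team
    purpose: str                      # what the system is used for
    embedded_in: str                  # existing cyber or tech product it sits inside, if any
    data_sources: list[str] = field(default_factory=list)  # data the AI draws on
    contains_personal_data: bool = False
    risk_rating: str = "unassessed"   # e.g. low / medium / high

# A hypothetical entry of the kind a board report might summarise.
register = [
    AIUseRegisterEntry(
        system_name="Customer-email triage model",
        business_owner="Head of Customer Service",
        purpose="Routes inbound emails to the right team",
        embedded_in="CRM platform",
        data_sources=["customer emails"],
        contains_personal_data=True,
        risk_rating="medium",
    )
]

for entry in register:
    print(entry.system_name, "-", entry.risk_rating)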

It says company directors should assess the key risks arising from AI use, as well as its potential benefits within their organisation, and develop policies and processes to manage both.

It recommends that boards and senior executives be given briefing sessions on artificial intelligence so they have a “baseline understanding” of the technology.

It also recommends that boards take action to verify whether the data which is being used for artificial intelligence processes within their organisation is “fit for purpose”.

This includes understanding what data their AI draws on, considering where and how it is stored, used and accessed, and making sure there are adequate privacy and security protections in place for the data.

Mr Rigotti said AI had the potential to offer “significant productivity and economic gains”.

But he said that “alongside these benefits lie potential risks from AI system failures” as well as potential abuse, including potential misuse of personal data, “algorithmic discrimination and poorly controlled automated decision making”.

He said managing the potential risks and rewards required a “robust corporate governance framework that can adapt to the unique characteristics of AI systems”.

The federal government this year committed to introducing a range of initiatives to support the safe and responsible use of AI.

These include considering mandatory guardrails and the labelling and “watermarking” of AI in high-risk settings, and clarifying and strengthening laws to address potential AI harms.

Glenda Korporaal, Senior writer

Glenda Korporaal is a senior writer and columnist, and former associate editor (business) at The Australian. She has covered business and finance in Australia and around the world for more than thirty years. She has worked in Sydney, Canberra, Washington, New York, London, Hong Kong and Singapore and has interviewed many of Australia's top business executives. Her career has included stints as deputy editor of the Australian Financial Review and business editor for The Bulletin magazine.


Original URL: https://www.theaustralian.com.au/business/companies/legal-danger-to-directors-of-taking-the-eye-off-ai/news-story/ee92760af8e37f87e51614b7d119ed83