NewsBite

Blind spot: most boards lack expertise to handle AI risks and opportunities, says CSIRO report

A new CSIRO report highlights an urgent need for Australian directors to take emerging AI risks seriously.

The AI practices of 53 Australian and globally listed companies were analysed in the report, which says that many local companies are only beginning to consider AI opportunities. Picture: AFP
The Australian Business Network

Only four in 10 companies have a director well versed in AI ethics and most don’t have public AI policies, according to a new report by CSIRO and Alphinity highlighting an urgent need for boards to better grasp the threats and opportunities of AI.

The study, co-authored by the national science agency and the fund manager, produced an analysis framework combining ESG factors and AI ethics principles to help investors and companies manage AI risks and opportunities.

The AI practices of 53 Australian and globally listed companies were analysed in the report, which says that many local companies are only beginning to consider AI opportunities.

Banks such as Commonwealth Bank, ANZ and Westpac are the most advanced in their AI thinking.

“We see AI as a transformational technology, and it poses very significant risks to business if it’s not rolled out ethically,” said Alphinity head of ESG and sustainability Jessica Cairns.

“We have a relatively small pool of company directors servicing our larger businesses, and injecting some more specialist resources around not just AI, but more broadly, technology, is going to be really important over the next couple of years.”

ANZ was an example of a company that years ago identified the need to have stronger technology credentials at the board level and added new directors with the right experience as a result, she said.

Ms Cairns said boards needed to consider AI risks and opportunities seriously, including having subcommittees overseeing responsible AI matters and ­disclosures.

“We think it’s important that we move towards having some responsibility for AI at the board level, whether that’s through an ESG or audit subcommittee, or having dedicated reporting on AI strategy, (and ensuring) AI use incidents are going up to the board, etc,” she said.

Ms Cairns said the research highlighted that only a few businesses were starting to do that.

However, the majority of the reporting, she said, was “ad hoc” rather than structured and ongoing.

Boards, she said, needed a comprehensive understanding of the technical capability involved in AI opportunities and risks, ­ethical considerations, and the company’s overall strategic ­positioning.

She compared the need for responsible AI awareness among boards to the rising strategic focus on climate change risks over the past decade.

“We don’t think it has to be having somebody with a specific AI or data science background, but it’s at least having that broad knowledge in technology.”

The study found that 40 per cent of interviewed companies had internal responsible AI policies, but only 10 per cent made them public. This should be a priority demand from investors, she said.

“It doesn’t have to be a stand-alone document, but having a policy position on the use of AI and how the ethical elements are governed is really important,” Ms Cairns said.

“That’s the No.1 thing that we want nearly all companies to be working towards.

“There needs to be some sort of positioning … (about) how they manage different use cases and whether they have red lines or specific escalation practices for high-risk use cases.”

The new framework identifies material environmental, social and regulatory risks around 27 AI use cases across nine key sectors, and then provides 10 governance indicators to assess key responsible AI actions and disclosures that companies should prioritise.

The framework also contains 40 filterable questions about AI responsible practices to guide engagement with a company’s management team.

Data privacy emerged as the top ESG concern for companies managing AI, with human rights and modern slavery receiving less focus, according to the research.

With one of Australia’s AI ethics principles being to ensure AI “benefits individuals, society and the environment”, the focus on human rights is critical but underexplored in the AI space, the report says.

“The messaging from most companies today is that AI will augment people’s roles rather than replace roles. But I think in reality, it’s going to be a combination of the two … (and) I do think that the potential impact is maybe being underestimated or downplayed,” Ms Cairns said.

“And that is an area that we have identified that we need to circle back on with some of the companies that we spoke to, and make sure that this is an area that they are prioritising as well.”


Original URL: https://www.theaustralian.com.au/business/technology/blind-spot-most-boards-lack-expertise-to-handle-ai-risks-and-opportunities-says-csiro-report/news-story/54285a220b9fc15c9f399379e8850c18