
We need a more open debate on AI and ethics in the boardroom

Businesses must balance the benefits of AI with the ethical responsibilities around the impact of a disruptive technology.

As the adoption of AI accelerates, boards must consider the ethical issues around it.

With spending on artificial intelligence forecast to reach $57.6 billion in 2021, up from $12 billion last year, it's no surprise that nearly 85 per cent of executives believe AI will bring competitive advantage to their organisations. However, as the adoption of AI accelerates, the ethical considerations around jobs disruption, transparency, privacy and liability remain hotly debated.

At a recent business roundtable facilitated by the Trans-Tasman Business Circle and IBM, I saw up close a polarised debate among 15 executives representing the boards of some of Australia’s largest organisations, as well as leading academics and business leaders. Whether AI is as insidious a technology as Elon Musk claims or a force for good, as an ethicist I see several critical challenges emerging.

It’s here, and it is now

Despite the rapid acceleration of the technology, the Australian business community and government still talk about AI as something "on our horizon". As a result, boards are poorly positioned to work out the right way to adopt a technology that can think and learn like humans.

AI is far from perfect: it relies on machine learning algorithms, still in the early stages of learning, to understand tasks and achieve optimum decision-making. But the technology will improve, and businesses will need to balance the benefits of lower costs, increased efficiency and innovation with the ethical responsibilities around liability and managing the impact of a technology that will disrupt jobs on a large scale.

While business leaders understand the business value, the fact that only one company in the ASX 200 has an ethics board makes you wonder whether companies and boards are sufficiently geared up to think through the ethical issues that AI throws up.

It may seem difficult to bring the ethics of AI into the boardroom but it’s hardly unexplored territory. There are many industry partnerships with the major technology players to guide the debate on policy. However, the responsibility shouldn’t fall just to those building the systems but also those who are adopting them. We must bring AI and ethics into the boardroom for a more open debate.

Responsible reskilling

It's widely agreed that the displacement of jobs is one of the most critical ethical issues raised by AI, if not the most critical. Some influential voices believe it will be the first technology ever to destroy jobs on a permanent basis. I personally do not believe there is enough evidence to support that claim. However, we do know that it will disrupt the workforce as we know it, and we still have a long debate ahead of us on skills and the future of work.

On a more optimistic note, analysts recently predicted that AI will become a net job creator, generating 2.3 million jobs by 2020 while eliminating 1.8 million. It will automate many manual and process-driven tasks, allowing employees to move into higher-value roles. But the obvious question remains: how do we ensure no one is left behind, and that the second-order effects on communities are dealt with responsibly?

Measures such as the Edelman Trust Barometer show overwhelming evidence that trust in corporations and governments is heading over a cliff. Unless companies, and those with responsibility for the common good, act in concert, there is a risk of creating the kind of divided society that ushered in Brexit and the election of Donald Trump as US President. If I were on a board, I would see red flags around the impact of AI on jobs and devote the time to consider that impact, on the bottom line and on my workforce. This is why it must be on the agenda of boards and in our parliament, to chart a new course for the education of our youth and the re-education of those in roles most at risk of displacement.

Trusting machines

If an AI system makes a mistake, who is responsible? However widely AI is used and however great its benefits, it will by definition still make mistakes, some of which may be quite significant, even lethal, and involve multiple parties.

The key for boards is to understand that the party ethically responsible is the one that provided the AI in the first place. For example, industry leaders such as Volvo have accepted full liability for the performance of their self-driving cars, a move designed to give consumers confidence in the new technology. Being transparent about company liability helps build the customer trust needed for an organisation's licence to operate. This ethical concept holds that businesses exist only because of the community's acceptance of an organisation's service to the common good.

Perhaps the most important point raised during the roundtable concerned data privacy. This is a highly sensitive area, as is evident from Australia's continually evolving data protection laws. Many of the participating senior executives discussed openly the challenge of keeping up with societal expectations concerning data collection, privacy and how AI is used with an individual's data. Maintaining the trust of the public is key: ethical negligence or careless use of data can erode customer trust and endanger a business's licence to operate.

Take one example. The Economist reports that AI is now being used in Berlin to identify people who are LGBTIQ. It is incredible that such a clearly discriminatory use of a technology with the power to stigmatise is legal. This further highlights why we must continue to advance the dialogue around ethics, particularly as transformative and disruptive technologies like AI are adopted.

At the board level we have a responsibility, a privilege and an opportunity to chart the course of AI through the ethical issues of liability, jobs and trust. Only when we get these right will we truly begin to realise the societal and economic promise that AI brings.

Peter Collins is director, Centre for Ethical Leadership.


Original URL: https://www.theaustralian.com.au/business/technology/we-need-a-more-open-debate-on-ai-and-ethics-in-the-boardroom/news-story/863457fc5886bd552f1c687be1399186