
A question of ethics: AI faces its most important crossroad

As the industry arrives at the crossroads of AI, Merkle’s Jack O’Neill asks: can we act to harness the technology responsibly, or will we let it run mostly unchecked?

Jack O’Neill is client partner at Merkle, a Dentsu company.

Artificial Intelligence, especially generative AI, is no longer the stuff of science fiction – it’s here, it’s powerful, and it’s transforming every facet of our lives. From healthcare and finance to environmental management and entertainment, its influence is pervasive. But here’s the catch: while AI holds immense promise, it also poses significant risks if not guided by strong ethical principles.

Australian business stands at a crossroads. Will we harness AI responsibly to benefit all, or will we let it run mostly unchecked, risking societal harm and eroding public trust?

The high stakes of ignoring AI ethics

Imagine a future where AI systems make decisions that unfairly discriminate, invade privacy, or operate without accountability. Without ethical oversight, this isn’t a dystopian fantasy but a looming reality. Missteps in AI can lead to public backlash, legal trouble, and long-term damage to brand reputations. According to Dentsu’s Data Consciousness Project, some 58 per cent of Australians are concerned about privacy and security when AI accesses their personal data, and they want transparency. Companies that ignore these demands risk losing customer loyalty and market share.

Experts, including two of the three “godfathers” of modern AI, Geoffrey Hinton and Yoshua Bengio, believe society isn’t giving enough priority to the risks of AI misuse, focusing instead on pushing the boundaries of innovation whatever the cost to society.

But we can’t ignore the risk to society. We need to understand and address the three pivotal ways AI can falter.

First, AI is only as good as the data we feed it. If that data is biased, unrepresentative, or flawed, the AI’s decisions will mirror those imperfections, sometimes with serious repercussions. This year, New York City’s new AI chatbot, designed to help businesses navigate regulations, went awry because of flawed training data. Instead of promoting compliance, it advised companies to ignore legal requirements, essentially telling them to break the law. The training data was incomplete and failed to cover the complex legal landscape.

Second, AI “hallucinations” occur when systems generate outputs that are false, misleading, or downright nonsensical, yet present them as accurate, spreading misinformation and eroding trust. In April, Elon Musk’s AI chatbot Grok publicly accused NBA star Klay Thompson of going on a vandalism spree in California. To be clear, he did no such thing. Grok had likely confused the basketball slang of throwing “bricks”, used when a player’s shot badly misses the rim, with literal vandalism.

Third, as AI becomes more sophisticated, there’s a danger that humans will over-rely on it, abdicating the responsibility to think critically and make informed judgments. This complacency can allow errors to go unchecked and ethical oversights to multiply, undermining the very benefits AI is meant to provide. Famously, in 2023 a lawyer at Levidow, Levidow & Oberman relied on ChatGPT to research precedents for a case, but at least six of the cases cited in the brief didn’t exist. The result was a fine for the firm, a thrown-out case, and significant damage to its brand.

The importance of human oversight

To prevent AI systems from making flawed decisions that could lead to financial troubles or damage reputations, businesses must implement rigorous data management and ethical practices. This means ensuring all training data is accurate and truly representative of Australia’s diverse population – regularly auditing datasets for biases and inaccuracies.

Ethical data sourcing is just as crucial: collect data responsibly, respect privacy laws, obtain necessary consent, and avoid perpetuating stereotypes or discrimination. Moreover, involving diverse teams in AI development brings varied perspectives, helping to spot biases that homogeneous groups might overlook. By prioritising data integrity and ethics, companies can safeguard against the pitfalls of flawed or biased training data.
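
To make this concrete, here is a minimal sketch of what a representation audit might look like in practice. The column name, benchmark shares and tolerance below are hypothetical placeholders rather than real Australian statistics or any standard tool; a genuine audit would span many attributes and draw on official population data.

```python
# A minimal sketch of a dataset representation audit. The column name,
# benchmark shares and tolerance are hypothetical placeholders, not
# real Australian statistics.
import pandas as pd

# Illustrative benchmark shares (assumed figures for this sketch only).
POPULATION_SHARE = {"NSW": 0.31, "VIC": 0.26, "QLD": 0.20, "WA": 0.11, "Other": 0.12}

def audit_representation(df, column, benchmarks, tolerance=0.05):
    """Compare each group's share of the data with its benchmark share
    and flag any gap larger than the tolerance."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in benchmarks.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "benchmark_share": expected,
            "flagged": abs(share - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Example: a skewed sample that over-represents one state.
sample = pd.DataFrame({"state": ["NSW"] * 60 + ["VIC"] * 20
                       + ["QLD"] * 10 + ["WA"] * 5 + ["Other"] * 5})
print(audit_representation(sample, "state", POPULATION_SHARE))
```

The same pattern extends to any attribute a business suspects may be skewed in its data; the point is that the check runs regularly, not once.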

We need to maintain human oversight and human verification mechanisms, in line with the best-practice policies laid out by the federal government, to ensure critical decision-making processes are never left to AI alone.

Continuous staff training on AI’s limitations and the importance of critical thinking is essential, emphasising that AI is a tool to aid, not replace, human judgment. By fostering a culture of scepticism and verification, Australian businesses can avoid AI errors and maintain the trust of their stakeholders.
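
As an illustration, the following is a minimal sketch of a human verification gate, assuming the AI system reports a confidence score alongside each output. The class, threshold and routing labels are hypothetical, not any vendor’s API.

```python
# A minimal sketch of a human-in-the-loop verification gate, assuming
# the AI system reports a confidence score with each output. The names
# and threshold are hypothetical, not any vendor's API.
from dataclasses import dataclass

@dataclass
class AIDecision:
    output: str
    confidence: float  # 0.0 to 1.0, as reported by the (hypothetical) model

REVIEW_THRESHOLD = 0.9  # below this, a person must sign off

def route_decision(decision: AIDecision) -> str:
    """Auto-approve only high-confidence outputs; send everything else
    to a human reviewer for verification."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto-approved"
    return "queued for human review"

print(route_decision(AIDecision("Refund approved", 0.97)))     # auto-approved
print(route_decision(AIDecision("Contract clause OK", 0.62)))  # queued for human review
```

The design choice is deliberate: the system defaults to human review, so an over-confident model has to earn automation rather than being granted it.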

Call to action: Embrace ethical AI or risk being left behind

The race for AI advancement is on, but without ethics or guardrails, it’s a race to the bottom. Companies that ignore ethical principles may gain short-term advantages but will ultimately face backlash – from consumers, regulators, and society at large.

AI systems must operate in alignment with human values. Brands must put people and community above short-term advantage and focus on driving AI innovation within an ethical framework. This is the only way AI can deliver positive results for people, society and business.

Jack O’Neill is client partner at Merkle, a Dentsu company.

