Sponsored by Protiviti

AI risk v reward: striking the right balance


Australia’s proposed Safe and Responsible AI framework could mark a turning point for businesses embracing artificial intelligence, says Leslie Howatt, technology lead at global consulting firm Protiviti.

The federal government’s draft regulations to govern artificial intelligence seek to introduce “mandatory guardrails” designed to mitigate risks from high-risk and general-purpose AI (GPAI) systems.

The framework calls for transparent, accountable, and risk-managed AI practices, echoing the European Union’s AI Act but tailored to Australia’s legal and technological landscape. These guidelines aim to prevent misuse while encouraging innovation, striking a balance between growth and governance.

AI systems must be transparent, fair and secure, with safeguards in place to mitigate potential risks. iStock

“The federal government’s draft regulations highlight the need for clear governance structures and ethical standards, particularly for high-risk applications like healthcare and critical infrastructure,” says Howatt.

While the framework’s focus on risk mitigation is welcome, Howatt cautions that businesses must move quickly to align with these changes or risk falling behind in a rapidly evolving regulatory environment.

The draft framework, released by the federal government for public consultation, aims to create a foundation for responsible AI use across industries. It sets out principles for transparency, accountability, and fairness, with a particular focus on high-stakes sectors. The initiative is designed to balance fostering innovation with mitigating risks, including biases in automated decision-making, data privacy concerns, and security vulnerabilities in AI systems.

Howatt says that the framework’s emphasis on collaboration with industry stakeholders is key.

“By involving businesses in shaping these guidelines, the government is providing an opportunity for proactive engagement. This collaboration can ensure regulations are both practical and robust, giving Australian organisations a competitive edge globally while safeguarding public trust in AI technologies,” she says.

The proposals have sparked discussion across the sector, with legal and tech experts raising concerns about enforcement challenges, particularly for AI systems developed abroad. Questions also surround the broad designation of “high-risk” applications and their potential implications for sectors such as healthcare, finance, and education. Supporters, however, argue that establishing these guardrails is crucial to protect against bias, security vulnerabilities, and other unintended harms.

The debate comes as new research reveals that Australian employees are already using AI to boost productivity, often without their employers’ knowledge. A recent Microsoft survey found that 84 per cent of knowledge workers use generative AI at work, with 70 per cent bringing their own AI tools (BYO AI).

This “shadow AI” phenomenon reveals AI’s transformative potential but also raises questions about governance and transparency, and highlights how balancing AI’s benefits with effective risk management is crucial for the future of work.

The upsides

AI’s transformative impact spans streamlining complex processes, creating risk reports, automating workflows, and analysing data. “In the financial services sector, for example, banks are using AI for fraud detection and pattern recognition, credit scoring, and trading,” says Howatt.
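To make that pattern-recognition use case concrete, the sketch below shows one common approach: unsupervised anomaly detection over transaction features, here using scikit-learn’s IsolationForest. The feature set, synthetic data, and assumed fraud rate are illustrative only, not a description of any bank’s production system.

    # A minimal sketch of anomaly-based fraud screening. All data here is
    # synthetic; 'contamination' encodes an assumed fraud rate of about 1%.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=42)

    # Synthetic transactions: [amount, hour_of_day, merchant_risk_score]
    normal = rng.normal(loc=[60.0, 14.0, 0.2], scale=[25.0, 4.0, 0.1], size=(1000, 3))
    fraud = rng.normal(loc=[900.0, 3.0, 0.8], scale=[200.0, 1.5, 0.1], size=(10, 3))
    transactions = np.vstack([normal, fraud])

    # Fit an unsupervised anomaly detector on the pooled transactions.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(transactions)

    # predict() returns -1 for anomalies (candidates for human review), 1 otherwise.
    flags = model.predict(transactions)
    print(f"Flagged {int((flags == -1).sum())} of {len(transactions)} transactions for review")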

“AI offers transformative opportunities, enabling innovation and service enhancements previously unimaginable. However, without proper oversight, AI implementation can lead to security vulnerabilities, ethical issues, and compliance challenges. AI systems must be transparent, fair, and secure, with safeguards in place to mitigate potential risks,” says Howatt.

Leslie Howatt, technology lead at global consulting firm Protiviti. 

Proceeding with caution

Rapid adoption, often outside the purview of management, means pitfalls can easily get out of hand. Governance isn’t intended to slow AI adoption and implementation, but rather to create new opportunities by accelerating and optimising its responsible and effective use, says Howatt.

Archana Yallajosula, technology director at Protiviti, says: “We’re all vulnerable to biases and errors that may be hidden in AI models or algorithms.”

“This could range from AI systems perpetuating biases in society through to mistakes in facial recognition technology, incorrect financial market algorithms, deepfake recordings, or even a healthcare misdiagnosis if an algorithm is trained on a flawed dataset.

“This is where human intervention and guardrails become fundamental. By implementing rigorous governance frameworks and policies, organisations can identify and mitigate AI-associated risks, without missing out on the opportunities AI presents.”

Crisis simulations, for example, are becoming increasingly common as a way to test AI systems.

“These simulations mirror real-life scenarios and deliberately try to break the AI system by finding biases, testing robustness, checking security, and ensuring ‘do no harm’ principles are firmly in place,” says Yallajosula.
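One simple check such a simulation might run is a disparate-impact test: comparing a model’s approval rates across demographic groups against the common “four-fifths” convention. The sketch below is a minimal, hypothetical illustration; the model outcomes and threshold are stand-ins, not a prescribed methodology.

    # Compare approval rates across groups and flag any group whose rate
    # falls below 80% of the best-performing group's (the four-fifths rule).
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def four_fifths_check(rates, threshold=0.8):
        best = max(rates.values())
        return {g: r / best >= threshold for g, r in rates.items()}

    # Hypothetical outcomes from a credit-scoring model under test.
    decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
                 + [("group_b", True)] * 55 + [("group_b", False)] * 45)

    rates = approval_rates(decisions)
    print(rates)                     # {'group_a': 0.8, 'group_b': 0.55}
    print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}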

“At Protiviti, another way we’ve balanced AI risk versus reward is through creating ProGPT.

“Extensively trained with internal company data, we have full control over this system, which has been created in alignment with our governance frameworks. Not only have we seen increased efficiencies in internal operations, we’re able to help organisations with predictive modelling and enhanced workflows, so we can provide greater value to our clients.”

Archana Yallajosula, technology director at Protiviti.

Preparing for governance

At the same time, Australia’s National AI Centre is working with industry to help organisations balance AI risks and rewards.

The National AI Centre is collaborating with the Institute for Applied Technology—Digital (IAT-D) to provide one million free AI micro-skills courses to all Australians, and has also produced video explainers with the Human Technology Institute (HTI) on AI compliance and procurement challenges, as well as risk management.

These efforts underscore the importance of equipping both businesses and individuals with the necessary tools and knowledge to navigate the evolving landscape of AI responsibly.

Five key actions

These five steps can help organisations strike the right balance on their AI governance journey:

  • Establish a committee for cross-functional decision-making and accountability.
  • Provide guidelines for ethical AI development and deployment.
  • Train employees on AI technologies and best practices.
  • Track AI initiatives and capabilities for risk assessment and compliance (see the sketch after this list).
  • Focus resources on the most critical AI risks to mitigate potential harm.
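As a minimal sketch of the tracking step, an AI register can be as simple as a structured list of initiatives with a risk tier, so reviews can be prioritised. The field names and tiers below are illustrative assumptions, not a prescribed standard.

    # A lightweight, hypothetical register of AI initiatives for risk tracking.
    from dataclasses import dataclass

    @dataclass
    class AIInitiative:
        name: str
        owner: str
        use_case: str
        risk_tier: str   # e.g. "high" for healthcare or critical infrastructure
        last_review: str

    register = [
        AIInitiative("chatbot-support", "CX team", "customer queries", "low", "2024-06-01"),
        AIInitiative("credit-scoring", "Risk team", "loan decisions", "high", "2024-03-15"),
    ]

    # Surface high-risk systems first, in line with the final action above.
    for item in sorted(register, key=lambda i: i.risk_tier != "high"):
        print(f"{item.risk_tier.upper():>4}  {item.name}  (last reviewed {item.last_review})")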

The future of AI-enabled businesses and societies hinges on balancing significant rewards with inherent risks. Adopting a structured approach to AI governance, risk management, and ethics is essential to decrease the likelihood of unauthorised AI usage slipping under the radar in the workplace and, ultimately, to protect the community at large.
