Clear and present danger: DeepSeek underscores AI’s risks
DeepSeek underscores the need for regulatory measures that protect Australians regardless of where AI operations are based, with the Chinese model’s tools able to extract insights from public and private data.
Artificial intelligence is rapidly transforming Australia’s social and economic landscape, driving innovation across industries from finance to healthcare.
Yet as AI systems become more embedded in daily life, they introduce new public policy concerns, particularly around data access, jurisdictional issues and the environmental impact.
The Australian government has engaged extensively with these issues and has established an expert advisory group; however, it must accelerate its response and urgently address these challenges, especially as many AI systems are operated by companies headquartered offshore, outside Australian jurisdictional reach.
The opposition must support a bipartisan response; this is not the time or the issue to play politics.
AI systems rely on a vast array of diverse and dynamic data sources, which makes integrating data effectively a significant challenge for organisations. In sectors like finance, where data is highly sensitive, poor data management can lead to breaches, errors and security risks.
Under the Australian Privacy Principles and the Financial Services Reform Act, financial institutions must protect customer information rigorously. AI introduces new complexities here: when data from multiple sources is combined without proper controls, individuals’ privacy is at risk and institutions face potential legal repercussions. Clear legal and regulatory standards, backed by a strong enforcement regime, are needed to ensure companies manage data responsibly, especially as AI systems grow more sophisticated and interconnected. The question of who accesses and controls AI data is also critical.
Many AI systems operate offshore, making it challenging for Australian regulators to monitor compliance with local privacy laws. Australians may unknowingly share sensitive data with foreign entities, placing it at risk of misuse or unauthorised access.
Without jurisdictional reach, Australian authorities struggle to investigate breaches or enforce compliance, particularly when personal data is stored in countries with weaker privacy regulations.
This lack of control underscores the need for regulatory measures that protect Australians regardless of where AI operations are based. One current example of AI left unchecked is DeepSeek, an advanced language model trained to mine and analyse vast amounts of sensitive data, often without clear disclosure of its operations.
DeepSeek’s tools have been used by organisations to extract insights from public and private data, yet its algorithms have been known to produce “hallucinations”, or inaccurate outputs, with significant consequences.
It also operates through a prism of self-censorship. Ask it about the history of the Tiananmen Square protests and you get nothing critical of the incident or of the Chinese government’s response.
In fact, you get nothing at all. This demonstrates the high risks of operating advanced AI systems without robust regulations to enforce transparency, accountability and accuracy.
AI tools like DeepSeek highlight the urgent need for governments to act before unregulated models cause further harm. Hallucinations are especially problematic in industries where accuracy is essential, such as healthcare, finance and law.
In financial services, for example, an AI system making decisions based on incorrect data could result in costly errors or even regulatory breaches.
Rigorous validation mechanisms are needed to control false positives and ensure reliable outcomes, but current AI models often lack these safeguards.
Regulated industries where trust and accuracy are paramount must prioritise robust validation frameworks to ensure AI outputs remain reliable and safe for end-users.
Offshore AI operations introduce considerable risks of their own. Many leading AI providers are headquartered abroad, often in the US (where the USA Patriot Act gives the US government unfettered access to data stored there), and process data on foreign servers.
This creates jurisdictional challenges for Australian regulators. If data is stored or processed offshore, Australian authorities may have limited ability to monitor and control it, or seek legal recourse in cases of data misuse or abuse.
In highly regulated sectors like finance, this lack of jurisdictional control is particularly concerning as it leaves Australian organisations and individuals vulnerable to the policies and practices of foreign entities.
To mitigate this, Australia should seek international agreements to enforce cross-border privacy and data protection standards. Partnerships with key countries could enable Australia to hold foreign entities accountable for handling Australian data responsibly.
Without such agreements, Australians are exposed to the decisions and risks imposed by offshore operators with limited means of recourse or protection.
AI operations demand substantial computational power, particularly the advanced language models and neural networks that underpin many of today’s AI systems. This energy demand contributes significantly to carbon emissions, largely due to the cooling requirements of data centres.
If AI systems grow without consideration for their environmental impact, they could threaten Australia’s carbon reduction targets under commitments such as the Paris Agreement.
One solution is to encourage renewable energy use in data centres – potentially through local incentives that prioritise green energy sources for high-demand industries like AI.
Left unchecked, the environmental cost of AI could outweigh its social and economic benefits, putting Australia’s sustainability goals at risk. Addressing this issue is critical to ensuring AI development aligns with the country’s long-term environmental objectives.
In the financial (and insurance) industry, where AI use is on the rise, responsible AI practices are essential to protect consumer trust and maintain regulatory compliance. Financial institutions must incorporate the principles of transparency, fairness and accountability into their AI frameworks to ensure ethical and effective use.
AI decision-making processes should be explicable, particularly in areas like lending and risk assessment. Consumers and regulators need to understand how decisions are made; without transparency, institutions risk undermining customer trust and regulatory compliance.
Bias in AI models can lead to unfair outcomes that disproportionately impact certain demographic groups. Ensuring fairness in AI-driven financial services is not only an ethical obligation but also a regulatory one; biased decisions in lending or credit scoring can have serious consequences.
Organisations must be able to trace and address AI-generated errors, ensuring consumers are not unfairly affected by inaccurate or harmful outputs. Accountability safeguards also help reassure customers that AI tools are deployed with their wellbeing in mind.
AI holds immense potential to benefit society; however, without careful and immediate regulatory action, it also poses substantial risks. The challenges of data access, offshore jurisdiction and environmental impact require an urgent and co-ordinated policy response.
The time to act is now.
Philip Dalidakis is a former Victorian minister for innovation and the digital economy and is now managing partner at Orizontas