Opinion
Don’t fear DeepSeek – Australia can launch its own start-ups
Raffaele Ciriello
Senior lecturer

DeepSeek’s rise is a David v Goliath moment. A little-known Chinese start-up launched an open-source AI model powerful enough to rattle Silicon Valley’s titans. Within hours, $US600 billion ($955 billion) vanished from Nvidia’s market value, and big tech lost more than $US1 trillion combined.
DeepSeek’s success shattered the myth that artificial intelligence leadership requires billion-dollar venture capital and corporate monopolies. It proved that efficient, open-source AI is a transformative force, levelling the playing field. But DeepSeek is neither the first David nor the last. Open-source AI is a seismic shift, with platforms such as Mistral (France) and Hugging Face’s DistilBERT (US) demonstrating that cutting-edge AI can be built cheaply, efficiently and with minimal computational resources.
DeepSeek’s success shattered the myth that AI leadership requires billion-dollar venture capital.Credit: Bloomberg
Australia now has a once-in-a-generation chance to lead – to become a David itself. Clear AI regulations can ensure AI serves the public good, not big-tech monopolies or foreign powers. Yet reactionary policies such as the DeepSeek ban on government devices show a short-sighted focus on risk over opportunity. To achieve true digital sovereignty, Australia must move beyond Cold War rhetoric and lead responsible AI governance. The question is no longer whether AI should be regulated, but how to regulate it responsibly – without stifling progress or enabling harm.
Open-source AI, such as DeepSeek’s MIT-licensed model, is free to use. Anyone can change the code and run it on consumer-grade hardware. This is its greatest strength but also its key risk. On the upside, it boosts competition and transparency, and it improves efficiency by activating only the essential parts of the model, shrinking its size without sacrificing performance – much as MP3 compression simplified music storage. These advances make AI more cost-effective and less reliant on high-powered hardware. For Australia, this is a big opportunity to compete with global tech giants.
On the other hand, it is harder to regulate. Unlike proprietary AI, where companies face legal oversight and shareholder scrutiny, open models can be used by anyone – including bad actors. Without safeguards, they risk being weaponised for disinformation and cybercrime. Therefore, we must enforce AI regulations that ensure transparency, accountability and responsible development. The question remains: Who controls AI?
The US and its allies fear that China could use AI to expand surveillance, export authoritarianism or tilt global power in its favour. Some argue an open-source AI model from China could become a Trojan horse for foreign interference. The reality is more complex.
DeepSeek explicitly states that user data is stored on Chinese servers, while OpenAI’s privacy policy is more ambiguous about where user data is stored or who has access. OpenAI has repeatedly shared user data with the US government, proving that AI surveillance extends beyond China. If Australia wants true AI sovereignty, it must establish enforceable frameworks that hold all providers accountable – whether from Silicon Valley or Beijing.
Open-source AI also reignites a classic environmental dilemma. Its modular design reduces computational waste and makes AI more energy-efficient. DeepSeek consumes a fraction of ChatGPT’s resources.
But efficiency does not guarantee sustainability. Efficiency gains are often offset by an exponential rise in usage, leading to a net increase in resource consumption. This is known as Jevons’ paradox, or the “rebound effect”: when cars become more fuel-efficient, people buy bigger cars, buy more of them, and drive them more often. If AI becomes drastically cheaper and more accessible, electricity demand could skyrocket, straining grids and increasing carbon emissions. If we fail to act, AI models could become one of the world’s biggest energy drains. Australia must align AI with renewable energy, mandate energy-use disclosures and ensure sustainability is built into regulation – not treated as an afterthought.
One of the biggest unanswered questions about open-source AI is: how will it sustain itself? DeepSeek may have rattled Silicon Valley, but disruption cuts both ways. Chinese tech giant Alibaba’s Qwen2.5-Max is already outpacing DeepSeek in efficiency and functionality, proving that no AI leader can afford to stand still.
While DeepSeek is currently free, OpenAI monetises ChatGPT through subscriptions and corporate licensing. AI development is expensive. To survive, open-source AI must adopt sustainable revenue models, such as enterprise licensing, paid premium features or usage fees – strategies that have succeeded for Linux and Firefox.
But with great power comes great responsibility. If DeepSeek – or any AI provider – chooses the wrong business model, it won’t just mirror the surveillance capitalism of Facebook and Google – it will supercharge it. Relying on intrusive ads, data harvesting, or manipulative algorithms risks entrenching the very exploitation that open-source AI was meant to counter.
To prevent harm from escalating, Australia must proactively regulate AI business models, ensuring that monetisation strategies prioritise ethical transparency over profit extraction. A particularly concerning trend is AI companions that simulate empathy to manipulate users into oversharing and fostering unhealthy dependence, a serious risk for minors.
DeepSeek’s rise proves AI doesn’t have to be monopolised by a handful of corporations, but without strong governance, open-source AI could lead to misuse, environmental harm or financial instability. Australia must act now to ensure AI serves society – not just commercial or foreign interests. The nation has a once-in-a-generation opportunity to lead in responsible AI governance by prioritising:
- Digital sovereignty, developing AI on Australia’s terms with strong privacy protections.
- Inclusive design, ensuring AI reflects Australia’s pluralistic society through participatory co-design that includes diverse stakeholders.
- Sustainability, mandating transparent energy disclosures and renewable energy use from AI providers.
- Ethical regulation, banning exploitative business models and implementing special safeguards for minors, neurodivergent individuals and other vulnerable groups.
- Global leadership, shaping international AI standards by contributing to global governance frameworks for digital services.
The European Union has just announced the InvestAI initiative, mobilising €200 billion ($331 billion) for AI development to enhance its competitiveness against the US and China. Australia must not fall behind.
The window of opportunity is wide open. The question is whether Australia is bold enough to lead AI towards serving everyone, everywhere, equitably.
Raffaele Ciriello is a senior lecturer in business information systems at the University of Sydney.