How soon will AI become better than us?

The benefits and risks are far-reaching, and whoever leads in the development of artificial intelligence ‘will be the ruler of the world’.

Will we be able to trust superintelligent machines whose understanding, abilities and raw power exceed those of humanity combined?

Once the preserve of technology geeks, artificial intelligence has gone mainstream – resulting in an explosion of digital and virtual capabilities. These range from the helpful personal assistants on our digital devices and the mapping of more than 160,000 new virus species, to troubling “deepfakes” that are impossible to detect with the naked eye.

OpenAI’s ChatGPT has helped democratise AI but has also raised many questions about the technology’s value and impact. Will it be genuinely transformational and a boon to society, as advocates proclaim, boosting productivity, creating undreamed-of prosperity and liberating us from life’s drudgeries? Or are the nay-sayers right that its promise is over-hyped and the risks potentially catastrophic without guardrails and humans in the loop?

Supporters point to the undoubted workplace efficiencies AI brings and its centrality to a new “boundaryless ecosystem of collaboration that spans continents, time zones and cultures”.

A recent study by the International Labour Organisation concluded that AI was likelier to enhance job roles than eliminate them. Big Tech is overwhelmingly bullish, downplaying the dangers and arguing that sharing code allows engineers and researchers across the industry to identify and fix problems.

But the public is unconvinced. And the nay-sayers are gaining ground. Polling from the Lowy Institute shows more than half of Australians believe the risks of AI outweigh its benefits. Serious people, including those with deep knowledge of the technology, are beginning to sound the alarm.

Geoffrey Hinton, who has just won the Nobel prize in Physics and is regarded as the godfather of AI, says that while AI systems bring incredible promise, the risks “are also very real and should be taken seriously”.

Technophile Elon Musk has branded AI “the most disruptive force in history”.

Soroush Pour, chief executive of tech start-up Harmony Intelligence, testified at a Senate inquiry into AI that the “next generation of AI models could pose grave risks to public safety”. And an analysis for the well-regarded International Institute for Strategic Studies concludes: “Despite the best efforts of researchers and engineers, even our most advanced AI systems remain too brittle, biased, insecure and opaque to be relied upon in high-stakes use cases. Put simply, we do not know how to build trustworthy AI systems.”

Researchers and tech executives have long believed AI could fuel the creation of new bioweapons, turbocharge hacking and allow bad actors to steal the technology and use it for nefarious purposes, including military domination. Superintelligent machines might go rogue and pose an existential threat to humans, giving rise to the question: How do we prevent AI from becoming a single point of societal failure while preserving its undoubted benefits?

Time is running out to provide answers because the technology is accelerating at a blistering pace, driven by order-of-magnitude increases in computing power and in the data that feed the large language models essential for deep learning.

A decade ago, AI could barely identify images of cats and dogs. Early versions of ChatGPT performed at the level of a primary school student, but the latest version (GPT-4) performs like a smart high-schooler: it can write competent code and essays, reason through difficult maths problems and outperform most secondary students.

The next big leap is shaping up to be even more extraordinary. Generative AI will initiate an explosion of knowledge leading to superintelligence, a point often referred to as the “singularity”.

Joi Ito, digital architect and president of Japan’s Chiba Institute of Technology, says AI is moving at incredible speed. It will be able to do most routine tasks within three years and then become “better than us”.

Some insiders say the singularity could occur as early as 2027, when AI research itself becomes automated, allowing machines to program machines. That could mean the AI labs of 2027 doing in a minute what the training of GPT-4 took three months to do – a speedup of roughly five orders of magnitude, since three months is about 130,000 minutes.

According to former OpenAI technical program manager Leopold Aschenbrenner, superintelligent AI systems would be “vastly smarter than humans, capable of novel, creative, complicated behaviour we couldn’t even begin to understand – perhaps even a small civilisation of billions of them”. We are building machines that can think and reason, he says. They will be able to solve robotics and make dramatic leaps across other fields of science and technology. An industrial revolution will follow, marked by unprecedented increases in productivity and economic growth.

It took 230,000 years for the global economy to double as civilisation went from hunting to farming, but only 15 years after the adoption of modern manufacturing techniques. It’s conceivable superintelligent machines could increase productivity exponentially within a decade as armies of computer-generated avatars, operating like remote workers, become fully functional in any workplace environment. But will we be able to trust superintelligent machines whose understanding, abilities and raw power exceed those of humanity combined?

Tech entrepreneur and University of Adelaide chair of creative technologies Thomas Hajdu says: “The rise of generative AI is not just another technological advancement; it is a seismic shift that will reshape the landscape of cognitive work.” As AI becomes more sophisticated, “our ability to discern when to trust its judgments and when to apply human insight will be crucial”. Solving the discernment problem – the ability to distinguish quality, relevance and ethical implications – becomes paramount.

Industry and Science Minister Ed Husic concedes the challenge posed by AI risks to public safety. Picture: NewsWire / Martin Ollman

That’s why the issue of AI safety is rising to the top of policy agendas globally. Many countries, including Australia, are moving to put guardrails around AI to ensure humans retain control and the risks to public safety are minimised. As a first step, the Albanese government has introduced voluntary codes on the use of AI that will eventually lead to a comprehensive regulatory regime and a new Australian AI Act. Industry and Science Minister Ed Husic concedes it’s “one of the most complex public policy challenges for governments across the world”.

These challenges are only going to get bigger and more complex as the race for AI supremacy heats up. Two of the most pressing are finding enough money and enough power.

AI is not just a set of advanced computer chips and some clever software. Realising the technology’s vast potential will require trillions of dollars and the rapid construction of a supporting infrastructure of dedicated power plants and tech clusters, including large new chip fabrication plants.

Doubters contend that the financial and infrastructure demands will constrain AI’s progress. Optimists disagree, pointing to the rapid scaling up of investment and infrastructure in the US, the world’s leading AI nation.

Aschenbrenner says AI has set in motion an “extraordinary techno-capital acceleration” that may surpass anything yet seen. Revenue for AI products is heading towards $US100bn ($148.6bn) by 2026 from virtually zero a decade ago. The global military AI market is only about $US9bn but is expected to triple in the next eight years.

Deloitte, EY, KPMG and PwC say some jobs will go because of artificial intelligence but others will be created as the industry competes to be top dog in AI.

Total AI investment could exceed $US1 trillion by 2027, and we could soon see $US100bn AI clusters hosting large language models and their supporting infrastructure. AI-specific chips will be common by the end of the decade, stimulating a further leap in the capability of the machine-learning algorithms behind generative AI.

Power of another kind may be the most important short-term constraint. AI is a voracious consumer of electricity, partly because of the skyrocketing demand for the data centres needed to process and store AI data and keep the hardware cool. Some computing clusters require as much electricity as a medium-sized city. One large data centre complex in the US state of Iowa consumes the equivalent of seven million laptops running eight hours a day, according to its owner, Meta.

On present trends the US will have to boost electricity capacity by more than 20 per cent by 2030 just to meet AI demand. A recent report by the Tech Council of Australia concludes that data centres consume 5 per cent of the electricity on Australia’s grid, with some analysts expecting that share to double by 2030. Globally, the World Economic Forum estimates that AI energy requirements are growing by 26 to 36 per cent a year.

Finding this additional power will have major implications for energy and climate change policy because Big Tech, previously an enthusiastic supporter of renewable energy, is becoming agnostic about where the power comes from. In an interview with The Washington Post, Aaron Zubaty, chief executive of a Californian company that develops clean energy projects, says: “The ability to find power right now will determine the winners and losers in the AI arms race. It has left us with a map bleeding with places where the retirement of fossil fuel plants is being delayed.”

But the biggest challenge will be ensuring AI doesn’t unleash powers of destruction surpassing those of nuclear weapons, which profoundly reshaped war because, for the first time, humans had the means to destroy themselves.

A cooling tower at the Zaporizhzhia nuclear plant in Ukraine, on fire after a drone attack. Picture: X/Twitter.

Military power and technological progress have been tightly linked historically. The first computer was born in war. Today’s computers are ubiquitous and pivotal to defence capabilities, with AI poised to transform the future battlefield. This is already evident in Ukraine, which has become a global test bed for new weaponry. A proliferation of small, cheap drones, increasingly enabled by AI, is helping the Ukrainian armed forces balance the combat ledger against more numerous and better-armed Russian forces.

Israeli operations against Hezbollah and Hamas use sophisticated AI programs with names such as Gospel and Lavender to process vast amounts of data and generate thousands of potential targets for assassinations and military strikes. Even Hezbollah is using AI, notably to help its surveillance drones evade Israel’s multi-layered air defences – underlining AI’s broad appeal as an equal-opportunity technology.

Drones on their own are merely disruptive. But combining them with “digitised command and control systems”, incorporating AI-enhanced “new era meshed networks” of commercial and military sensors, delivers a game-changing “transformative trinity”, write two retired generals, Australia’s Mick Ryan and America’s Clint Hinote. These capabilities allow soldiers on the frontlines to see and act on real-time information previously held in distant headquarters.

But “superintelligence will be the most powerful technology – and most powerful weapon – mankind has ever developed”, says Aschenbrenner. It will give the first country able to harness its power a decisive and potentially revolutionary military advantage. “Authoritarians could use superintelligence for world conquest and to enforce total control internally. Rogue states could use it to threaten annihilation.”

These possibilities have not escaped the notice of world leaders and defence policymakers.

In 2017, Russian President Vladimir Putin asserted that the country that became the leader in AI development “will be the ruler of the world”. China has telegraphed that it wants to become the global leader in AI by 2030. And in a 2019 speech, US defence secretary Mark Esper declared “whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years”.

Decisive could mean as little as a one- or two-year lead. Who wins this race will have major consequences for the global balance of power and for democracies. If the autocrats win, they will use AI to ruthlessly enforce control over their own populations and impose their will on other countries. Dictators aren’t much troubled by ethical, legal or privacy concerns.

For now the US is ahead thanks to huge investments by the AI “fab five” – Apple, Microsoft, Nvidia, Alphabet (Google) and Meta (Facebook). But China is mobilising for the race, ramping up investment in its own large language models and AI clusters. China’s AI model market is projected to reach about $US715bn by 2030. Even so, the best Chinese laboratories are only equivalent to second-tier US labs and they are uncomfortably dependent on American open-source AI, a dependency China’s leader, Xi Jinping, is determined to redress.

China’s big advantage is a superior capacity for industrial mobilisation. It can close the gap by outbuilding the US on AI infrastructure, as it has in electricity generation, solar power, electric vehicle batteries and shipbuilding.

But the greater risk is that Beijing will steal its way to victory by covertly acquiring the digital blueprints and key algorithms that are the crown jewels of American AI. These are highly vulnerable for two reasons. They are poorly protected. And many leading AI researchers and companies are philosophically opposed to restrictions on access to their work, believing the benefits of sharing outweigh the risks.

That attitude is changing rapidly as the Biden administration recognises the dangers of advanced AI falling into the wrong hands, which could jeopardise national security and wipe out the gains from billions of dollars of US investment, research and development.

A worst-case scenario is that smart machines could perversely endanger humanity because of unforeseen biases or unanticipated outcomes when used in war.

Former US Joint Chiefs of Staff chairman Mark Milley and former Google chief executive Eric Schmidt write in the policy journal Foreign Affairs that war games conducted with AI models have found “they tend to suddenly escalate to kinetic war, including nuclear war, compared with games conducted by humans”. Milley and Schmidt argue that even if China doesn’t co-operate, the US should ensure its own military AI is subject to strict controls, kept under human command and made to conform to liberal values and human rights norms.

Perhaps the last words should be left to Stephen Hawking, the inspirational British theoretical physicist and author who thought deeply about AI. He warned: “It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.”

Alan Dupont is chief executive of geopolitical risk consultancy The Cognoscenti Group and a non-resident fellow at the Lowy Institute.
