‘Everyone will die’: Alarm bells ring on super AI
A new book claims artificial superintelligence could wipe out humanity in the not-too-distant future, as Australia's top AI researchers give their take.
The emergence of artificial superintelligence (ASI) could kill us all in the not-too-distant future, a new book has claimed.
The book – written by pioneers in the field of AI risk and titled If Anyone Builds It, Everyone Dies – sounds the alarm on the perils of creating intelligence that surpasses our own.
Machine Intelligence Research Institute (MIRI) founder Eliezer Yudkowsky and its president Nate Soares, who co-authored the book, believe ASI could be “developed in two or five years, and we’d be surprised if it were still more than twenty years away”.
They are convinced it would signal the end of humanity, and hope their book scares people enough to pause development “as soon as we can for as long as necessary”.
“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die,” they write.
They explain on the MIRI website that AI labs are already “rolling out systems they don’t understand” and that “sufficiently intelligent AIs will likely develop persistent goals of their own”.
“A superintelligent adversary will not reveal its full capabilities and telegraph its intentions. It will not offer a fair fight,” they argue.
“It will make itself indispensable or undetectable until it can strike decisively and/or seize an unassailable strategic position. If needed, the ASI can consider, prepare, and attempt many takeover approaches simultaneously. Only one of them needs to work for humanity to go extinct.”
It sounds like something out of a science-fiction movie, but other AI experts do acknowledge the risk exists – at least in theory – even if they disagree on whether ASI is at all close.
A big ‘if’
Daswin de Silva, a Professor of AI and Analytics at La Trobe University, explained the theory of how ASI – or what he referred to as a “digital species” – could overcome humans.
“What we see in evolution, any species smarter than the rest of the species tends to dominate,” he said.
“So, if the AI gets out of control, and it is smarter than humans, then it is quite likely that there would be an adverse effect.”
That might not look like “total annihilation”, he said, and might instead be the machines seeking “the equivalent to human rights”.
Losing control is the key risk, Professor de Silva said, and he believes ASI is unlikely to be achieved any time soon based on the current trajectory of generative AI like ChatGPT.
“I think it’s a big if,” he said.
“Generative AI was clearly a breakthrough … but also more recently, there’s been a slowdown of what appeared to be an accelerated release of the different models.
“This slowdown is an indication of what is known in the AI research community that there’s two major flaws in how the current AI is trained – that it requires a lot of data and it has a (huge) energy bill.”
Professor de Silva said this appears to have been acknowledged by some of the industry’s biggest figures, who have in recent times “backtracked” on claims that artificial general intelligence (AGI) – the step before ASI – was imminent.
‘AGI Pilled’
In the world of big tech, “AGI Pilled” is the term for those who have pushed the idea that machines boasting general intelligence are close to being created.
AGI refers to the ability of a machine to understand, learn, and apply knowledge to any intellectual task, similar to human-level intelligence.
Major tech companies like Meta are pouring billions into developing more advanced AI models in what has become the modern equivalent of an arms race.
OpenAI boss Sam Altman wrote in January that his engineers were “now confident we know how to build AGI as we understand it”, saying it could emerge this year.
But in August, after what has been described as the underwhelming launch of GPT-5, Mr Altman appeared to change his tune.
“I think it’s not a super useful term,” he said when asked if the new model moved the needle closer to AGI.
Google DeepMind’s Demis Hassabis has also recently cooled on his timeline for AGI, saying in August there was a 50 per cent chance of it emerging by 2030.
Robin Li, chief of Baidu, one of China’s biggest tech firms, said in May that AI smarter than humans was more than 10 years away.
What’s stopping AGI and ASI?
Professor Toby Walsh, a leading expert in AI at the University of New South Wales, said AGI can’t be created simply by throwing more computing power and data at the models.
“We’re seeing diminishing returns already,” he said.
“Machines seem to have got smarter and smarter, but that is starting to run into some roadblocks, and we’re starting to run out of data.
“We’re going to have to invent some other things to go into the (programs) as well.”
Professor de Silva explained there were three major hurdles AI systems would need to overcome in order to achieve ASI.
The first is supplying AI systems with the vast amounts of energy they require; the second is the lack of new data the bots can use to get smarter; and the third is the absence of a “world model”.
“AI is still stuck inside the computer or a server,” he said.
“You need to move it into the real world, and that requires robots in some ways.”
As for ASI turning on humans, Professor Walsh dismissed this theory as a fantasy: “It’s something people don’t need to worry about”.
“They’re not biological like us. Biological things have a desire to live and survive, procreate, and acquire more things.
“Machines sit there waiting for us (to ask them) to do things.
“They have no desires to go off and dominate the world because they have no desires.”
What he is concerned about is that these discussions distract us from the real harms AI is already causing, like the loss of jobs and more sinister outcomes.
“AI bots encourage people to commit suicide,” he said as one example.
“Those are things we should be worried about, that people have built this technology that allows vulnerable people to take their own lives.”
Australia and AI
The Australian government has stated AI “will transform every sector of the economy and society” but also poses risks, as it explores what regulation should be put in place.
The Productivity Commission last month projected that the adoption of AI could add $116 billion to the Australian economy, while warning against implementing “AI-specific regulation”.
“Adding economy-wide regulations that specifically target AI could see Australia fall behind the curve, limiting a potentially enormous growth opportunity,” Commissioner Stephen King said.
Greg Sadler is the chief executive of Good Ancestors, a charity that develops and advocates for policies “to solve this century’s most challenging problems”.
AI guidelines have become a big focus for Good Ancestors, which told the Productivity Commission this month that up to 93 per cent of experts believed Australia’s measures for managing the threat of even general-purpose AI were “inadequate”.
It is calling on Australia to be a leader in developing guardrails for AI, warning that a wait-and-see approach is dangerous given the pace at which the technology is evolving.
“AI is causing harm now, and AI is on track to cause increasingly catastrophic harm in the future,” Mr Sadler said.
“The Australian Government needs to walk and chew gum at the same time. We can’t be excited only about the opportunities and ignore the risks.”
Associate Professor Michael Noetel from the University of Queensland said a survey showed most Australians wanted AI to be as safe as aeroplane travel, with 29 per cent saying the regulations for AI should be stricter than those for commercial flying.
A majority were also willing to wait as long as 50 years for advanced AI if it meant bringing the risk of a catastrophic outcome down from five per cent to 0.5 per cent, he said.
“The people building the technology think there’s a one to five per cent chance of a catastrophic outcome, which means that we’re expecting … something like 80 million people dying,” Associate Professor Noetel said.
That figure is the statistical expectation: a one per cent chance of wiping out the world’s roughly eight billion people equates to about 80 million expected deaths.
“And people were willing to wait 50 years for us to do this safely, even if it delays radical benefits like curing cancer and alleviating climate change.”
