Will artificial intelligence trigger a nuclear war?
New military technologies are making the world a more unstable and dangerous place.
On September 26, 1983, Stanislav Petrov had an important decision to make – a decision that could have destroyed the world.
Petrov, a lieutenant colonel in the Soviet Air Defence Forces, was on duty when the satellite surveillance system he was monitoring spotted what looked like five American nuclear missiles heading for the Soviet Union. It was a tense time. Just three weeks earlier the Soviets had shot down Korean Air Lines Flight 007, killing all 269 passengers and crew.
Petrov’s orders were to alert his superiors, who probably would have launched an immediate counter-strike. Instead, believing that the US would not have launched a nuclear attack with just five missiles, he treated the warning as a false alarm. In fact, the early-warning satellites had mistaken sunlight reflected off clouds for incoming missiles.
A 2014 documentary immortalised Petrov, without much exaggeration, as the “man who saved the world”.
If the same incident had happened last week, we might not have been so lucky. The blue screen of death that crashed 8.5 million Windows computers might have taken down everything else, too.
The global computer outage, caused by a faulty update pushed out by US software company CrowdStrike, offered an unnerving glimpse of something much bigger: the next failure could be another accident, or something more sinister.
According to Toby Walsh, Laureate fellow and Scientia professor of artificial intelligence at the University of NSW, those two scenarios “may well be one and the same”, with “malicious actors trying to take out our infrastructure – our hospitals, traffic lights, financial markets and air traffic control systems”.
The technology behind last week’s global meltdown was designed specifically to avoid such a disaster.
“The internet was invented to be a nuclear-proof communication network,” Walsh tells Inquirer. “The original funding by the US military was so that they would have a back-up way of communicating when the bomb had dropped. But it turns out it’s not as robust – by a long measure – as anyone thinks.”
Little has been disclosed about the effects of the CrowdStrike outage on military networks, but there is no reason to believe defence infrastructure is inherently less vulnerable to catastrophic disruption than its civilian counterpart. The cost of failure, however, could be incomparably higher.
“We know what happens when we put these complex computer systems together in an uncertain environment,” says Walsh. “We already do it – it’s called the stockmarket. Every now and again you get a flash crash, but there are circuit-breakers in place. You record all the transactions and you can stop everything and say: ‘OK, none of that happened – let’s unwind everything that went wrong in the last half-hour and give everyone their money back and start again.’
“It works with the stockmarket but it wouldn’t work if it was the demilitarised zone between North Korea and South Korea or some other flashpoint. You’ve already started a nuclear war. You can’t say everybody gets their lives back.”
The rapid adoption of artificial intelligence for military purposes – from controlling drone swarms to targeting nuclear weapons – adds another level of jeopardy to a system already fraught with technological risk.
Earlier this year a US State Department official, Paul Dean, called on Russia and China to declare that only humans – and not AI – would make decisions on the use of nuclear weapons.
US officials have insisted that an AI system would never be entrusted with US nuclear launch codes or given control over US nuclear forces. But the increasing use of AI-enabled technology for tasks such as analysing satellite imagery and predicting nuclear scenarios means that any decision by the US president to use nuclear weapons would likely be influenced by AI-supplied data.
Last year the US military began using a program called Stormbreaker, described as “an AI-based gaming and simulation capability”. In March this year, shortly before stepping down as commander of the US Indo-Pacific Command, Admiral John C. Aquilino told Congress that the aim of Stormbreaker was to “develop a Joint Operational Planning Toolkit enabled by advanced technology that will include advanced data optimisation capabilities, machine learning, and artificial intelligence to support planning, war gaming, mission analysis” and what he called “execution of all-domain, operational-level course of action development” – in other words, military operations.
The Bulletin of the Atomic Scientists reported this week that NATO was already testing and preparing to launch an AI system “designed to assist with operational military command and control and decision-making by combining an AI war-gaming tool and machine-learning algorithms”. While it was not clear how such a system might influence decisions by NATO’s Nuclear Planning Group over the actual use of nuclear weapons, the Bulletin commented that AI-powered analytical tools such as this “could be used to inform targeting and force structure analysis or to justify politically motivated strategies”.
In Britain a House of Lords committee investigating the use of AI in weapon systems published a report last December that cast doubt on the British government’s claims to be “ambitious, safe (and) responsible” in its use of AI for autonomous weapons. The report concluded darkly that “aspiration has not lived up to reality”.
While acknowledging that advances in AI had the potential to “bring greater effectiveness to nuclear command, control and communications”, the report also warned that the use of AI had the potential to “increase the likelihood of states escalating to nuclear use – either intentionally or accidentally – during a crisis”.
The compressed time for decision-making when using AI could lead, it suggested, to increased tensions, miscommunication and misunderstanding, while an AI system “could be hacked, its training data compromised, or its outputs interpreted as fact when they are statistical correlations, all leading to potentially catastrophic outcomes”.
Walsh points out that our success to date in avoiding nuclear war has been due not to technology but to human agency and judgment – qualities at risk of being marginalised by the rise of AI. Referring to near-misses such as the Cuban missile crisis and Petrov’s overruling of the Soviet nuclear alert, Walsh notes that “there have been a number of cases where it’s only because there has been some human oversight or intervention into the system that we were prevented from getting there. It was a human in the loop that prevented disaster happening.”
In Ukraine, the use by both sides of jamming systems to disrupt communications between pilots and drones has accelerated the development of AI-controlled drones. Some see these autonomous drones as the key to Ukraine defeating its more powerful and better-armed enemy.
As well as identifying targets and mapping terrain for navigation, AI will enable the Ukrainians to deploy drones in interconnected “swarms”, with humans intervening only at the last minute to give the go-ahead to automated strikes.
According to Serhiy Kupriienko, head of a software company called Swarmer, human pilots struggle to operate more than five drones. “When you try to scale up (with human pilots), it just doesn’t work,” Kupriienko told Reuters last week. “For a swarm of 10 or 20 drones or robots, it’s virtually impossible for humans to manage them.” AI, however, was capable of operating “hundreds” of drones simultaneously.
Technologies such as AI have already begun to make the world a more unstable, dangerous place, Walsh argues, by upsetting the balance of power.
“Until now it was only the superpowers, the United States, Russia and China, that could afford to buy these capable weapons systems,” he says, “but AI weapons systems are going to be cheap and cheerful. You can see that happening – Turkey is becoming a major drone superpower. It used to be that your military might was determined by your economic might, but in future your military might may be distinct from your economic might. That will be destabilising because the old alliances won’t be what we thought they were.”
In the mid-2000s, US and Israeli intelligence agencies began collaborating on a top-secret tool called Stuxnet, a powerful computer worm designed to sabotage the uranium enrichment centrifuges at the heart of the Iranian nuclear program. Stuxnet is believed to have come to light in 2010, when an office in Iran unconnected to the regime’s nuclear program found itself the victim of unexplained reboots and blue screens of death.
Two decades later the blue screen of death is more indiscriminate, more widespread and, as last week’s meltdown showed, beyond anyone’s control.