
Atomic bombs, dangerous new technology: races driven by fear that malign opponents will get there first

The atomic bomb conundrum was the first time anyone had to contemplate a man-made risk to the survival of humanity – but it almost certainly will not be the last, given the proliferation of potentially dangerous new technologies.

A mushroom cloud is commonly associated with a nuclear explosion.

There is a scene in Oppenheimer, Christopher Nolan’s brilliant new film about the head of the scientific team that developed the first atomic bomb during World War II, depicting an exchange between J. Robert Oppenheimer and Colonel (later General) Leslie Groves, the military officer in overall charge of the project.

The dialogue goes as follows:

Groves: “Then are we saying there’s a chance that when we push that button, we destroy the world?”

Oppenheimer: “The chances are near zero.”

Groves: “Near zero?”

Oppenheimer: “What do you want, from theory alone?”

Groves: “Zero would be nice!”

Nolan is engaging in a bit of cinematic licence here; there is no evidence this specific dialogue occurred. There was, however, genuine concern among Oppenheimer and some of the other scientists – especially Edward Teller, who was key to the later development of the hydrogen bomb – that the extreme heat created by detonating a fission bomb could trigger a chain of fusion reactions in the Earth’s atmosphere and oceans, destroying humanity and possibly all life on Earth.

Robert Oppenheimer and Major General Groves at the New Mexico atomic test site.

The first public disclosure of these concerns came in a 1959 interview by American writer Pearl S. Buck with Arthur Compton, a Nobel laureate physicist and Oppenheimer’s immediate boss during the bomb development.

In the interview, Compton described a discussion he had with Oppenheimer in the lead-up to the first detonation. It included this exchange:

Compton: “Hydrogen nuclei are unstable, and they can combine into helium nuclei with a large release of energy, as they do on the sun. To set off such a reaction would require a very high temperature, but might not the enormously high temperature of the atomic bomb be just what was needed to explode hydrogen?

“And if hydrogen, what about hydrogen in sea water? Might not the explosion of the atomic bomb set off an explosion of the ocean itself? Nor was this all that Oppenheimer feared. The nitrogen in the air is also unstable, though in less degree. Might not it, too, be set off by an atomic explosion in the atmosphere?”

Buck: “The Earth would be vaporised?”

Compton: “Exactly. It would be the ultimate catastrophe. Better to accept the slavery of the Nazis than to run the chance of drawing the final curtain on mankind.”

Later in the interview, Compton related the criterion the scientists used to decide whether the explosion should proceed:

Compton: “If, after calculation, it were proved that the chances were more than approximately three in one million that the Earth would be vaporised by the atomic explosion, he would not proceed with the project. Calculation proved the figures slightly less – and the project continued.”

Three in one million? That is roughly the same order of magnitude as the risk of dying in an accident when you take a scheduled airline flight, and that makes plenty of us nervous. Yet the stakes were unimaginably higher – the survival of humanity, and of untold future generations, maybe trillions of people.
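
To put the comparison in rough quantitative terms, here is a minimal back-of-the-envelope sketch. The airline figure and the assumed number of future lives are illustrative assumptions made for the purpose of the comparison, not figures drawn from the Manhattan Project record.

# A back-of-the-envelope comparison (illustrative assumptions only).
p_ignition = 3e-6      # Compton's stated threshold: three in one million
p_airline = 1e-6       # assumed order-of-magnitude risk of a fatal accident per flight
lives_now = 2.3e9      # rough world population in 1945
lives_future = 1e12    # "maybe trillions" of potential future people (assumption)

# Expected number of lives lost if the threshold risk were realised.
expected_loss = p_ignition * (lives_now + lives_future)
print(f"Risk relative to one airline flight: about {p_ignition / p_airline:.0f}x")
print(f"Expected lives at stake at the 3-in-a-million threshold: {expected_loss:,.0f}")

On these assumptions, the expected loss runs into millions of lives even at odds that sound vanishingly small – which is the nub of the dilemma Oppenheimer’s team faced.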

Two people walk on a cleared path through the destruction left by the atomic bomb dropped on Hiroshima on August 6, 1945. Picture: AP

Oppenheimer and his colleagues were grappling with an ethical dilemma of a kind never before faced by humanity – one in which the stakes were literally existential for our species.

Talk of existential risk is common in public debate these days, especially in relation to climate change. But even Michael Mann, one of the most zealous campaigners for climate action, concedes: “There is no evidence of climate change scenarios that would render human beings extinct.”

First-order existential dangers have always existed from purely natural sources – mainly cosmic events such as the remote chance of a world-destroying collision with a large asteroid or comet.

But the odds of such an event occurring in any given year are minuscule. And, in any case, until very recently there was absolutely nothing that could be done to prevent them. The bomb conundrum was the first time anyone had to contemplate an anthropogenic risk of this magnitude – but it almost certainly will not be the last given the proliferation of potentially dangerous (as well as beneficial) new technologies, most recently artificial intelligence.

How do you decide whether to proceed when facing a possibility such as this, however remote? Was Groves right to insist on zero risk (well, he said it would be nice)? And, as posed by Oppenheimer in the fictional dialogue, what level of reassurance can theory possibly provide?

How do you go from minuscule to absolute zero, given the propensity of even the most talented scientists to make modelling errors, amply demonstrated by the history of nuclear weapons development?


In 1976, Hans A. Bethe, who headed the theoretical division at Los Alamos and whose team worked out the implosion design used in the Trinity and Nagasaki bombs, insisted the danger of an atmospheric chain reaction was, in fact, zero. But this is belied by the risk analysis the scientists did at the time, declassified in 1973, which found the disaster scenario “highly unlikely” rather than impossible, noting that “the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable”.

Here is the problem: scientists make mistakes, not just calculation errors but theoretical misconceptions in their modelling of what is going to occur, especially with experiments that create new and extreme conditions. The development of nuclear weapons provides two confirmations of this fallibility.

The first concerns the wartime German atomic bomb program, which was able to draw on some of the most talented physicists in the world, including quantum theory pioneer Werner Heisenberg, who was the scientific head of the program. Fear of a Nazi bomb was a huge motivating factor for Oppenheimer and other Manhattan Project scientists. Yet the Germans never came close to getting atomic weapons and by 1942 had all but abandoned any serious attempt to develop them. After the war, some of the German scientists, including Heisenberg, tried to claim they had deliberately sabotaged the German program.

Robert Oppenheimer, pictured with physicist Albert Einstein, and his colleagues were grappling with an ethical dilemma of a kind never before faced by humanity.

This was refuted by secret recordings of the scientists while in British captivity at a country estate called Farm Hall. The truth was that Heisenberg made an error calculating the critical mass of enriched uranium needed to produce an explosion, based on an incorrect understanding of how the fission process spreads in a chain reaction.

Heisenberg estimated that 13 tonnes of enriched uranium would be needed – an industrial challenge so vast that it made a practical bomb all but impossible. So confident was Heisenberg in his calculations that he initially insisted reports of the Hiroshima bombing were fake Allied propaganda. But he was wrong – the Los Alamos scientists correctly calculated the critical mass at about 60kg.

In the post-war period, there was a race to develop an even more powerful weapon, the hydrogen bomb, which releases energy by fusing isotopes of hydrogen, producing an explosive yield dwarfing that of fission bombs. The first test, in 1952, required cryogenic liquid deuterium, making the device impractical as a weapon. To address this, the scientists came up with a new design in which a hydrogen isotope (deuterium) was combined with lithium to form a solid compound, lithium deuteride, that could be carried in an aircraft.

This design was detonated in the Castle Bravo test at Bikini Atoll in the Pacific in March 1954. Castle Bravo produced a far larger explosion than the scientists had calculated, terrifying observers as the fireball grew larger and larger, with a yield of 15 megatons compared with the estimate of five to six megatons.

So, what happened? There was an unexpected reaction involving an isotope of lithium (lithium-7) that the scientists thought would be inert. Whoops!

Since then, the process of physics experimentation has continued, with new existential concerns being raised from time to time, as occurred during the lead-up to the opening of the Large Hadron Collider by CERN, the European Organisation for Nuclear Research, in October 2008.

British astrophysicist and Astronomer Royal Martin Rees described the worries of some physicists about the LHC in his book On the Future: Prospects for Humanity (2021). They included the possible creation of stable black holes that might consume the Earth, the appearance of hypothetical particles called strangelets that could transform the Earth into a hyperdense sphere 100m across, or a catastrophe that causes a rip in space-time.

Calculation showed acceptable chances that Earth wouldn’t be vaporised, and the project led by Oppenheimer continued. Picture: Keystone

The worries about the safety of the LHC were taken seriously enough for CERN to commission a detailed investigation, which concluded that the fears were unwarranted.

But the conundrum remains. For example, the report drew on an estimate made by the Brookhaven National Laboratory in the US that put the probability of a strangelet catastrophe at about one in 50 million. Which doesn’t sound too bad – for anything other than a first-order existential threat that risked bringing down the curtain on the human story. Moreover, there remains the concern that the estimates may be wrong because of theoretical misconceptions, especially with physics experiments that create unprecedented extreme conditions. Rees states: “If our understanding is shaky – as it plainly is at the frontiers of physics – we can’t really assign a probability, or confidently assert that something is unlikely. It’s presumptuous to place confidence in any theories about what happens when atoms are smashed together with unprecedented energy.”

So we might say, with Groves in the movie dialogue, that zero probability of an existential catastrophe would be nice but seems unattainable. Should we therefore stop all risky experimentation that creates extreme or unprecedented conditions in physics and other fields, not least molecular biology with its risk of lethal pandemics?

That would mean denying humanity the benefits that powerful new technologies might bring, including an increased ability to avoid existential risks.

Take, for example, the danger posed by an asteroid capable of completely destroying human civilisation. According to NASA’s Center for Near-Earth Object Studies, the chance of a civilisation-ending impact event in any given year is about one in 500,000. A very low probability, but a potentially devastating – even species-extinguishing – effect.
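
To see how such a per-year figure accumulates, here is a minimal sketch; the one-in-500,000 annual probability is the figure cited above, while the time horizons and the assumption that each year’s risk is independent are illustrative.

# How an annual impact probability compounds over longer horizons (illustrative).
p_per_year = 1 / 500_000   # cited annual chance of a civilisation-ending impact

for years in (100, 1_000, 10_000):
    # Probability of at least one such event over the period, assuming independent years.
    p_cumulative = 1 - (1 - p_per_year) ** years
    print(f"Over {years} years: roughly 1 in {round(1 / p_cumulative):,}")

Even over 10,000 years the cumulative chance remains small, which is why the case for asteroid defence rests on the severity of the outcome rather than its likelihood.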

Until recently, there was nothing humanity could do to avoid such an eventuality. Modern technology – including nuclear weapons and rocketry – has changed that situation. According to a NASA study, “nuclear standoff explosions were likely to be the most effective method for diverting an approaching near-Earth object”.

ChatGPT and other large language model applications have accelerated existential fears about what happens if artificial general intelligence develops goals incompatible with human values.

The debate about existential risk has taken on a new urgency with the spectacular acceleration of progress on artificial intelligence during the past several years, a development that has well and truly entered the public consciousness with the release of ChatGPT and other large language model applications since November 2022.

AI is expected to have all manner of impacts, for good or ill, but the existential fears have arisen from the sense that we could be much closer than previously thought to artificial general intelligence: the ability to match or exceed human capabilities across a wide range of activities and knowledge domains.

What happens if an AGI emerges that is assigned, or develops on its own, goals incompatible with human values? What if such an AGI were to embark on a process of recursive self-improvement, rewriting its own code and producing an intelligence greater not just than any individual human but than the collective intelligence of humanity, making the leap from intelligence to super-intelligence?

Our continued existence, or at least the terms on which we are allowed to continue, would be at the discretion of an entity beyond our comprehension that may have values very different to our own. Those in the field term this the “unaligned AI” problem, and some analysts of existential risk believe it is the greatest threat we face.

Until recently, this would have seemed like a science fiction scenario that might arise centuries, or at least many decades, in the future. Now, a growing preponderance of opinion among leading developers and specialists in the field holds that this is a scenario we could face much sooner – and one we need to be preparing for now.

These concerns have prompted calls for a pause, or even a permanent halt, in advanced AI development. In May, a veritable who’s who of the AI world – representatives of just about all the major Western corporations working in the field, including Google DeepMind, Microsoft, OpenAI, Meta and Anthropic – signed a one-sentence statement that reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

China, under the leadership of Xi Jinping, has embarked on an aggressive program to achieve AI supremacy by 2030. Picture: Zuma Press/WSJ

What a pity, then, that this challenge has coincided with the worst deterioration of interstate relations since the height of the Cold War, as a coalition of dictatorships – dominated by the Communist Party regime in China – seeks to rewrite the rules underpinning the international order.

In 2017, the Chinese Communist Party embarked on an aggressive program to achieve AI supremacy by 2030 – a chilling prospect given the applications the regime has chosen to prioritise, including systems of surveillance and control that could make the emergence of organised dissent virtually impossible, with the unfortunate Uighur people in Xinjiang serving as a testbed for the most intrusive developments. This enormously complicates the chances of a credible, verifiable international agreement to minimise the dangers of advanced AI.

In 2021, the US National Security Commission on Artificial Intelligence, chaired by former Google chief executive Eric Schmidt, warned that if the US unilaterally placed guardrails around AI it could surrender AI leadership.

Just as with the 1940s atomic bomb program, the impetus to develop a powerful and dangerous new technology is driven by the prospect that a malign opponent might steal a march, leading to a future world dominated by a technology-empowered totalitarian hegemon.

It’s a chilling prospect if those who argue artificial general intelligence could be imminent turn out to be right.

Peter Baldwin is a former Labor politician.

