We've a lot to fear over AI: the world must wake up to its scale

William Hague
AI-generated deepfake image of former president Donald Trump being arrested.

In August 1939, Albert Einstein wrote to President Roosevelt to warn him that “the element uranium may be turned into a new and important source of energy” and that “extremely powerful bombs may thus be constructed”. Given that the first breakthrough towards doing this had been made in Nazi Germany, the United States set out with great urgency to develop atomic bombs, ultimately used against Japan in 1945.

Such was the unsurpassable power of nuclear weapons that, once the science behind them had been discovered, their development could not conceivably be stopped. A race had begun in which it was imperative to be ahead.

Albert Einstein urged the US to get ahead on the development of the nuclear bomb.

Last week, another letter was written about today’s equivalent of the dawn of nuclear science – the rise of artificial intelligence. This was a public letter from 1,100 researchers and experts, including Elon Musk, arguing that “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources”. In the absence of such planning, they called for a six-month pause in the training of AI systems more powerful than GPT-4, rather than see an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

Unlike Einstein, who was urging the US to get ahead, these distinguished authors want everyone to slow down, and in a completely rational world that is what we would do. But, very much like the 1940s, that is not going to happen. Is the US, having gone to great trouble to deny China the most advanced semiconductors necessary for cutting-edge AI, going to voluntarily slow itself down? Is China going to pause in its own urgent effort to compete? Putin observed six years ago that “whoever becomes leader in this sphere will rule the world”. We are now in a race that cannot be stopped.

Most people have not yet woken up to the scale, speed and implications of that race. We are used to the idea that technology progresses incrementally. Smartphones, for example, have changed our lives in countless ways. We know that, thanks to Moore’s law (computing power doubles roughly every two years), our phones have more computing power than the Apollo spacecraft that went to the Moon in 1969. Each new model has a few improvements: a better camera or battery. It seems quite slow and steady.

Elon Musk was one of 1,100 experts warning of the speed at which AI is advancing. Picture: AFP.

Now we have to get used to capabilities that grow much, much faster, advancing radically in a matter of weeks. That is the real reason 1,100 experts have hit the panic button. Since the advent of deep learning by machines about ten years ago, the scale of “training compute” – think of this as the power of AI – has doubled every six months. If that continues, it will take five years, the length of a British parliament, for AI to become a thousand times more powerful. The stately world of making law and policy is about to be overtaken at great speed, as are many other aspects of life, work and what it means to be human when we are no longer the cleverest entity around.
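
As a rough check of that thousand-fold figure (a back-of-the-envelope sketch, not a calculation from the letter itself): doubling every six months for five years means ten doublings, and two to the power of ten is 1,024.

    # Back-of-the-envelope arithmetic behind the "thousand times more powerful" claim,
    # assuming training compute keeps doubling every six months.
    doubling_period_months = 6
    years = 5
    doublings = (years * 12) // doubling_period_months   # 10 doublings in five years
    growth_factor = 2 ** doublings                        # 2**10 = 1,024
    print(f"{doublings} doublings -> roughly {growth_factor:,}x more compute")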

The rise of AI is almost certainly one of the two main events of our lifetimes, alongside the acceleration of climate change. It will transform war and geopolitics, change hundreds of millions of jobs beyond recognition, and open up a new age in which the most successful humans will merge their thinking intimately with that of machines. Adapting to this will be an immense challenge for societies and political systems, although it is also an opportunity and – since this is not going to be stopped – an urgent responsibility.

Like the nuclear age heralded by Einstein, the age of AI combines the promise of extraordinary scientific advances with the risk of being an existential threat. It opens the way to medical advances beyond our dreams and might well provide the decisive breakthroughs in new forms of energy. It could be AI that works out how we can save ourselves and the planet from our destructive tendencies, something we are clearly struggling to work out for ourselves. On the other hand, no one has yet determined how to solve the problem of “alignment” between AI and human values, or which human values those would be. Without that, says the leading US researcher Eliezer Yudkowsky, “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else”.

The ChatGPT logo at an office in Washington. Picture: AFP.

Faced with something so promising, so dangerous and so unstoppable, what should we do about it in the UK? First, we have to ensure we, with allied nations, are among the leaders in this field. That will be a huge economic opportunity, but it is also a political and security imperative, for AI could become the perfect tool of surveillance, influence and control for dictatorships. The government has made a good start on this, with £900 million for a new “exascale” supercomputer and the announcement of an expert taskforce to report to the prime minister. Last week, ministers published five principles to inform responsible development of AI, and a light-touch regulatory regime to avoid the more prescriptive approach being adopted in the EU.

This all makes sense as an effort to make us a home of innovation, but ministers should already be preparing for the next stage. The implications of AI will rapidly become too all-embracing for existing regulators to cope with, and there will be a need for central oversight, while Whitehall needs a dramatic elevation in its expertise. As James Phillips, a former No 10 science adviser, has pointed out, we will need much greater sovereign AI capabilities than currently envisaged. This should be done whatever the cost. Within a few years it will seem ridiculous that we are spending £100 billion on a railway line while being short of a few billion to be a world leader in supercomputing.

Before AI turns into AGI (artificial general intelligence) the UK has a second responsibility: to take the lead on seeking global agreements on the safe and responsible development of AI. Much of this would be common standards of values and transparency agreed with allies, but even China should agree never to let AI come near the control of nuclear weapons or the creation of dangerous pathogens. The letter from the experts will not stop the AI race, but it should lead to more work on future safety and in particular how to solve the alignment problem.

Last week, ministers said we should not fear AI. In reality, there is a lot to fear. But like an astronaut on a launch pad, we should feel fear and excitement at the same time. This rocket is lifting off, it will accelerate, and we all need to prepare now.

The Times

William Hague is a columnist for The Times.


Original URL: https://www.theaustralian.com.au/world/the-times/weve-a-lot-to-fear-over-ai-the-world-must-wake-up-to-its-scale/news-story/f63ef52fb2075cab687b44339fb136b3