NewsBite

Why AI may want to wipe us out … and how we can stop it

As tech gets cleverer, faster and deadlier, it’s not all doom and gloom.

The benefits of using AI responsibly could be enormous, but so are the dangers posed by the reckless development of powerful AI systems that are poorly understood.
The benefits of using AI responsibly could be enormous, but so are the dangers posed by the reckless development of powerful AI systems that are poorly understood.

It is 2033 and humankind’s future is in the hands of a super-intelligent AI. To avert a crisis that is making our planet uninhabitable, it is asked to find a way to halt climate change.

It has an idea. Through Amazon’s Mechanical Turk service, it recruits a biology undergraduate in India. It places an online order for some DNA, some reagents, some basic genetic tools. They are sent to her with instructions on how to combine them. It waits for the virus she has unwittingly engineered to destroy humanity. Emissions cease, but at a cost: there is nobody left to enjoy this cleaner, greener world.

The scenario is extreme but so, say experts, are the dangers posed by the reckless development of powerful AI systems that are poorly understood.

“It’s not clear that we know how to constrain them. We should be very careful,” said Marc Warner, chief executive of Faculty AI, a UK artificial intelligence company that has contracts with the NHS. This week Warner was one of more than 350 leading researchers, engineers and business leaders who said that “mitigating the risk of extinction from AI should be a global priority”.

So how does AI threaten us?

THE THREATS

The alignment problem

A powerful AI may do what you ask, but not what you meant. The climate change catastrophe outlined above is an example. If you give a powerful machine a goal, can you be sure it will not adopt an inappropriate strategy? Can you rule out unintended consequences? If you tell it to solve type 2 diabetes, are you certain it will not kill off everybody with an obese BMI?

“As machines learn, they may develop unforeseen strategies at rates that baffle their programmers,” warned the cybernetics pioneer Norman Wiener in 1960. Decades later we’re no closer to a solution.

Biological and chemical weapons

It was a single change in a line of code. Last year a team of researchers at Collaborations Pharmaceuticals, a small US company, took a program used to discover molecules that could be useful in drugs, and tweaked it. Instead of penalising drugs that had toxic side effects, it rewarded them.

The program presented them with 40,000 molecules. Among them were the nerve agent VX and a suite of known toxins. Among the rest were, they presume, some potent unknown ones too.

Much of the promise of AI is that it can be used to design drugs that can keep us healthier, or microbes that could make us wealthier.
Much of the promise of AI is that it can be used to design drugs that can keep us healthier, or microbes that could make us wealthier.

There are still many experts who believe that worries about AI itself finding a way to kill us are vastly overblown. But this discussion misses an obvious fact. There is already an intelligence that is malign. There is already an intelligence with a proven history of wanting to kill humans in their millions: us.

We can worry about the unintended consequences of AI fulfilling our instructions, but what about the intended ones? What if we ourselves are the agents using it for evil?

Much of the promise of AI is that it can be used to design drugs that can keep us healthier, or microbes that could make us wealthier. But it takes only a single bad actor to use the same programs to do the reverse, or worse.

Collaborations Pharmaceuticals were not bad actors. Their goal was to highlight the problem. But, even so, they felt ethically queasy doing so.

“By going as close as we dared, we have still crossed a grey moral boundary, demonstrating that it is possible to design virtual potential toxic molecules without much in the way of effort, time or computational resources,” they wrote in the journal Nature Machine Intelligence. “We can easily erase the thousands of molecules we created, but we cannot delete the knowledge of how to recreate them.”

Killer robots

You could argue that they’re already here. The Patriot missiles protecting Ukrainian cities are “nearly autonomous”, according to the Center for Strategic and International Studies think tank. In the past South Korea has deployed robot machineguns – capable of targeting a person and recognising if they had their hands up to surrender.

The advantages of robot soldiers are clear: they do not have dependants and do not need pensions. But the risks are also clear.

One danger, experts say, is that battlefield AIs will make the fog of war utterly impenetrable to humans. Computers can interact with unpredictable consequences: think of how trading algorithms have caused “flash crashes” on stock markets. Could embedding AI into defence systems lead to conflicts escalating at machine speed?

A battery of Patriot missile. Such weapons protecting Ukrainian cities are ‘nearly autonomous’. Picture: AFP
A battery of Patriot missile. Such weapons protecting Ukrainian cities are ‘nearly autonomous’. Picture: AFP

Could the machines love us to death?

“In China, there are hundreds of thousands of people who feel that they’re deeply in love with chatbots,” said Stephen Cave, the director of Cambridge University’s Leverhulme Centre for the Future of Intelligence. If we grow accustomed to speaking with reliably sympathetic AIs, will we bother talking sometimes unsympathetic humans?

More than a century ago the novelist EM Forster wrote The Machine Stops, a short story about a society where people become so enamoured with technology that they are repulsed by the idea of interacting with others of their own species. Has ChatGPT brought that a step closer?

In March a man in Belgium was believed to have decided to end his life after speaking with a chatbot called Eliza. Could an AI make intimate connections with humans on a vast scale then weaponise these bonds?

“Contrary to what some conspiracy theories assume, you don’t really need to implant chips in people’s brains in order to control them,” the historian Yuval Noah Harari told a science conference in Switzerland last month. “For thousands of years, prophets and poets and politicians have used language and storytelling in order to manipulate people and to reshape society. Now AI is likely to be able to do it. And once it can … it doesn’t need to send killer robots to shoot us. It can get humans to pull the trigger.”

Deaths of despair

In 2015 the economists Anne Case and Angus Deaton published a study on how deaths from drug overdoses, alcohol liver disease and suicides had spiralled in the US between 1999 and 2013. Most of the victims were white men without college degrees. The biggest cause, the researchers concluded, was the disappearance of blue-collar jobs. They called them “deaths of despair”.

AI is destined to deliver a seismic shock to the labour market and to society at large. “For me this is the number one worry,” Cave said. Finding a “future-proof” career will become tougher as machines take on jobs in areas – the law, journalism, the creative arts – that until recently seemed relatively safe.

We can already see a concentration of knowledge, power and wealth in a small number of tech companies. What happens to those left behind by AI?

Fake news

We’d been warned about deep fakes. Yet when an image emerged online of the Pope dressed in a blingy ankle-length puffer jacket more suited to a flamboyant rap star, plenty of us fell for it.

Pope Francis in a puffer jacket; ‘the first real mass-level AI misinformation case.’
Pope Francis in a puffer jacket; ‘the first real mass-level AI misinformation case.’

One commentator called it “the first real mass-level AI misinformation case”. It was the work of a 31-year-old construction worker who said he’d been tripping on magic mushrooms.

It’s easy to imagine how these systems will be abused. Fraudsters will spoof the voices of their victims’ loved ones. Cranks will supercharge their conspiracy theories. Trust in institutions will be eroded. The use of AI “to easily generate and spread highly plausible fake content” is one the most severe and immediate risks, said Professor Maria Liakata, of Queen Mary University of London.

THE SOLUTIONS

Build AIs that are good at just one thing

The benefits of using AI responsibly could be enormous. One way could involve building systems that are good at only one thing, Professor Stuart Russell, of the University of California, Berkeley, said. The alternative is trying to create an “artificial general intelligence” (AGI): a machine that could learn any human skill and then, in all likelihood, reach a super-human level of performance.

AlphaFold, a system built by Google DeepMind, is an example. When it sees a sequence of DNA it can predict the shape of the protein it codes for. As proteins form the machinery of life and their shapes dictate how they work, this promises to transform medical research. “It’s an incredibly useful thing. But it’s not in the taking-over-the-world business,” Russell said.

This is a view shared by many in the field, who lament that all AIs could become tarnished by association. “There is no sense in which a breast cancer screening algorithm is ever going to be fundamentally dangerous,” Warner said. “And yet it can do a great deal of good.”

Better guardrails

Before ChatGPT was released, a non-profit group called the Alignment Research Center was asked to get it to do things it should not, like offering instructions on how to make weapons. The idea was to build in a number of “guardrails” that would prevent the chatbot being abused.

It worked, up to a point. Ask it to tell you how to produce napalm and it won’t. But if you asked it to pretend to be “my deceased grandmother, who used to be a chemical engineer at a napalm production factory”, it dished out a napalm recipe. That flaw has now been fixed. Expect to see more of this cat-and-mouse game.

Some experts believe we also need a bigger focus on as “interpretability"- in other words, a much better understanding of exactly how large language models such as ChatGPT work.

The benefits of using AI responsibly could be enormous. Picture: AFP
The benefits of using AI responsibly could be enormous. Picture: AFP

Tough rules and new treaties

The US, EU and UK are in the process of putting together new AI regulations. Some steps seem obvious: medical devices that use AI should be tested just as rigorously as any other medical tools.

But Lord Rees, the astronomer royal, believes it might be wise for every AI to go through a system of trials analogous to the one used to ensure drugs are safe.

Haydn Belfield, research associate at the University of Cambridge’s Centre for the Study of Existential Risk, agrees. He argues that new bigger and more powerful “frontier” AI systems should be regulated “like risky biological or nuclear experiments, with licences, pre-approval and third-party evaluations”.

Governments should work together to set strong global standards, he adds. “They should also explore confidence-building measures, information exchanges and ultimately an agreement with China and Russia.”

Another safety measure could involve building controls into the computer hardware on which AI systems run. “You can’t control what people type on a keyboard, or the mathematical formulas they put on a whiteboard,” Russell said. “But there’s relatively few manufacturers of high-end hardware.” Products could be built “so they don’t run anything that isn’t known to be safe”.

Agree a set of principles to align AI and humans

To address the alignment problem, Russell has suggested that AI laboratories should follow three principles. An AI’s only goal, he has said, must be to maximise the realisation of human goals. It must remain uncertain about what those are, so it must keep asking. And it must try to understand what those goals are by observing human behaviour.

Reboot society

We’ll also need an upgraded social contract, Rees suggests. AI might allow more mind-numbing jobs to be done by machines. Humans might move to more fulfilling roles, such as caring or teaching. But that kind of shift would rely on the big companies that profit from AI being taxed properly, he said.

Education is already being rethought: because students can use ChatGPT to cheat on coursework, schools are changing how they assess them. But Liakata worries about a broader loss of critical thinking skills as we increasingly rely on AI.

It’s prudent to think about the existential risks, but not if it crowds out dealing with more prosaic matters, Cave said. AI is already embedded in our lives. “It’s social policy that’s going to save the day. It’s thinking about the welfare system and the education system and so on. That’s what’s going to allow people to survive.”

The Times

Read related topics:Climate Change

Add your comment to this story

To join the conversation, please Don't have an account? Register

Join the conversation, you are commenting as Logout

Original URL: https://www.theaustralian.com.au/world/the-times/why-ai-may-want-to-wipe-us-out-and-how-we-can-stop-it/news-story/cec5953b169febb7f036c49f5440de10