
Stephen Hawking, Elon Musk warn of artificial intelligence impact on future war, oppression

KILLER robots are inevitable: It’s a stark warning, but Stephen Hawking, Elon Musk and 1000 other academics and entrepreneurs have made an urgent appeal to do something about it.

Eye in the sky ... A life-size Hummingbird-like unmanned aircraft, named Nano Hummingbird, developed for the Defense Advanced Research Projects Agency (DARPA). Source: AP


“This technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

It’s an idea long the subject of science fiction books and films. These scientists say it is now a looming, fearsome reality.

The International Joint Conference on Artificial Intelligence (AI) in Buenos Aires will today be presented with an open letter signed by the eminent scientists and entrepreneurs, appealing for a worldwide and complete ban on autonomous, artificially intelligent weapons.

It’s not the first time either Stephen Hawking or Elon Musk has made such a warning. But it is the most high-profile.

“Starting a military AI arms race is a bad idea,” the letter reads. “(It) should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

“Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.”

It’s past time to do the same with artificial intelligence, the letter argues.

It’s an argument Australian AI researcher Professor Toby Walsh has been keen to see heard.

“In the interest of full disclosure, I too have signed this letter,” he writes in The Conversation. “My view is that almost every technology can be used for good or bad. And AI is no different. We therefore need to make a choice as to which path to follow.”

Future soldier? The Modular Advanced Armed Robotic System (MAARS) is a DARPA experiment into remote-controlled robotic weaponry. Artificial intelligence is a logical next step. Source: DARPA

Swarms of killer machines

The genie may already be out of the bag, with a swathe of semi-autonomous drones already in military service around the world and an explosion of research by organisations such as DARPA into ways of making them more effective and efficient.

Machines such as the semi-autonomous X-47B drone are already capable of operating from an aircraft carrier without the need for a pilot or controller.

Fully self-controlled, decision-making war machines are a logical next step.

This means selecting, targeting and engaging targets without any human intervention.

A report issued just last month by the US Army Research Laboratory predicts “swarms of robots that would act independently or collaboratively as they undertook a variety of missions” by 2050.

It goes on to conclude:

“A time traveller from today would be immediately taken with the “overcrowding” of the battlefield of 2050 populated by all manner of robots, robots that greatly outnumber human fighters, and robot-looking humans. Not immediately apparent to the time traveller, but critical in determining which of the adversaries would possess the decisive edge, would be the capabilities and autonomy possessed by the armies of virtual robots, the “intelligent” programs and processes to (undertake combat, combat-support and cyber operations).”

Such proliferation is terrifying prominent figures such as Noam Chomsky and Apple co-founder Steve Wozniak. They have added their voices to those who say now is the time to intervene.

“Artificial Intelligence technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades,” the open letter reads.

Professor Walsh is pragmatic in his view of such a ban: “A ban on offensive autonomous weapons is not going to prevent the technology for such weapons being developed. After all, it would take only a few lines of code to turn an autonomous car into an offensive weapon. But a ban would ensure enough stigma and consequences if breached that we are unlikely to see conventional military forces using them.”

Such regulations would also not stop terrorists and rogue states, he writes, but “they’ll have to develop the technology themselves. They won’t be able to go out and buy any such weapons”.

Backyard battlefield ... This flying pistol was built from commercially available components by a US teen. It is not regarded as illegal. Source: Supplied

Mass produced mechanical murderers

“Unlike nuclear weapons, (autonomous machines) require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” the open letter says.

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing …”

The letter goes on to warn that small armed drones could become the weapon of choice for assassination, terrorism and government oppression.

All it would take to achieve such a weapon is a smartphone-style processor, cameras and a gun mounted on a beefed-up version of the quadcopters now commercially available.

More professional versions would differ mainly in being much smaller.


The only thing preventing this from happening is the availability of artificial intelligence software that would combine facial recognition programs with a list of targets, while swiftly and effectively conducting a methodical search.

Such software is getting ever closer. The components already exist.

“With 1 gram of high-explosive charge, you can blow a hole in someone’s head with an insect-size robot,” artificial intelligence researcher at the University of California, Berkeley, Stuart Russell wrote in the journal Nature earlier this year. “Is this the world we want to create? I don’t want to live in that world.”

Professor Walsh echoes this sentiment: “We can get it right at this early stage, or we can stand idly by and witness the birth of a new era of warfare. Frankly, that’s not something many scientists in this field want to see.”

Thinking of the future ... Renowned physicist Stephen Hawking attends a press conference in London with Russian tech entrepreneur Yuri Milner to announce a $100 million program to harness computer power to search the heavens for alien life. Source: AP

Appeal to intervene

The open letter is a move by The Future of Life Institute, an organisation “working to mitigate existential risks facing humanity”. It is using a $10 million fund donated by Elon Musk to finance projects intended to rein in the development and application of artificial intelligence and find ways of ensuring such technology is used for “positive” purposes.

Professor Walsh emphasises this point: “Artificial intelligence is a technology that can be used to help tackle many of the pressing problems facing society today: inequality and poverty; the rising cost of health care; the impact of global warming; and many others. But it can also be used to inflict unnecessary harm.”

“Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle,” the open letter reads.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”

@JamieSeidel


Original URL: https://www.news.com.au/technology/innovation/inventions/stephen-hawking-elon-musk-warn-of-artificial-intelligence-impact-on-future-war-oppression/news-story/49133b38c1cead410232daf923137307