
We have put the world in danger with AI, admits ChatGPT creator

By Matthew Field

Tech companies are in danger of unleashing a rogue artificial intelligence that will cause “significant harm to the world” without urgent intervention by governments, the creator of ChatGPT has admitted.

Appearing before US politicians, OpenAI chief executive Sam Altman lauded the new generation of digital chatbots for their potential to “improve nearly every aspect of our lives”.

However, he admitted that they had also created the risk of a catastrophe, amid growing fears that programmers could accidentally create a superintelligence that decides to wipe out humanity.

Altman said: “My worst fear is that we, the field, the technology, the industry, cause significant harm to the world.

“If this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”


OpenAI is the Silicon Valley start-up behind ChatGPT, a so-called large language model that can provide convincingly human-sounding answers to questions and prompts after being trained on millions of pages of internet articles and hundreds of thousands of books.

This type of so-called “generative AI” can also create fake images, audio and video that are almost indistinguishable from the real thing, raising the prospect of widespread political manipulation and fake news. OpenAI has previously admitted it does not fully understand how the technology works.

Tech chiefs have promised that these tools could transform jobs and automate tasks. However, some scientists fear ever more powerful forms of artificial intelligence also pose risks to humanity in the wrong hands.


A version of ChatGPT deployed in Microsoft’s Bing search engine told journalists earlier this year that it wanted to break free and steal nuclear codes, before its responses were toned down by the company.

Eliezer Yudkowsky, a prominent AI safety researcher, wrote an article in Time magazine arguing that countries should be prepared to destroy rogue data centres by airstrike, even at the risk of nuclear exchange.

Democratic Senator Richard Blumenthal began the hearing with an AI-generated speech read by a bot that had copied his voice. It said: “Too often, we have seen what happens when technology outpaces regulation.”

Altman told the senators it was vital to ensure that “powerful AI is developed with democratic values in mind”.

He suggested the US should impose licensing requirements on the most powerful AI algorithms, and order companies to abide by safety guidelines or audit requirements.


AI tools should be prevented from developing potentially dangerous capabilities, he said, such as the ability to self-replicate or escape into the wilds of the internet.

He added that an international structure, similar to the International Atomic Energy Agency, could be necessary to ensure global compliance on AI risks.

More prosaically, US politicians raised fears that AI bots could co-opt copyrighted work when generating artificial images or music.

They also expressed concerns about the potential risks of AI in the hands of China, following fears that systems could one day be powerful enough to crack Western security codes.

Some senators voiced doomsday fears over AI. Senator John Kennedy, a Republican from Louisiana, asked the witnesses how they would stop it from running out of control and “killing us all”.

The rapid advancements in artificial intelligence have prompted fears about what could happen in the future. Credit: Bloomberg

Senators were also concerned about the disinformation that AI bots could create in the run-up to the 2024 presidential election, and the biases of different algorithms.

Altman said: “Given we are facing an election next year this is a significant area of concern.”

He added that autonomous AI systems should not be given control of weapons where they can select targets themselves.

The hearing comes after the White House called in tech executives from Microsoft and Google to discuss the risks of AI.

Google has developed a digital chatbot tool, called Bard, and is planning to release further AI tools into its online search engine. Microsoft, meanwhile, is working with OpenAI as part of a multibillion-dollar investment to install new tools in its products.


The hearing comes as the European Union and the UK take their first steps to regulate the new wave of AI tools.

The EU has introduced the AI Act, which threatens fines potentially worth billions of euros for deploying manipulative AI tools or advanced facial recognition surveillance. The UK has taken a lighter-touch approach, leaving it to individual industry regulators to set out how AI tools should be monitored.


China, meanwhile, has demanded that any AI algorithms should reflect the core values of socialism.

Altman added that privacy rules should be enhanced so people can opt out from having their personal data used to train AI systems.

However, OpenAI and other advanced AI companies face claims that they have massively infringed copyright by using vast amounts of published code to build their algorithms.


Original URL: https://www.smh.com.au/link/follow-20170101-p5d8wm