Batman said it. James Bond too. And Oscar Wilde declared it the sign of a thoroughly modern intellect: “Expect the unexpected”.
When it comes to technological change, there’s an important truth to this paradox. We thought, for example, that social media was going to unite us, and give voice to those without power. And at the start it did. In the Arab Spring, it was a force for freedom and democracy. But quickly that force was turned to other, less noble ends. To spread misinformation, conspiracy theories and deepfakes. Social media didn’t bring us together. It drove us apart.
So what unexpected impacts will AI have? There is no shortage of strident voices warning of the dangers.
Geoffrey Hinton, one of the intellectual giants behind the current deep-learning revolution, recently and very publicly resigned from Google so he could speak openly about his concerns. But his fears – that super-intelligent AI threatens our very existence – are still somewhat distant. For all its impressive party tricks, ChatGPT can be surprisingly dumb. There are simple examples where, for instance, it fails to count up to five correctly.
I suspect intellectuals such as Hinton put too much emphasis on intelligence. The smartest people don’t tend to have the most power. Indeed, intelligence seems a bit of a disadvantage in politics.
There is, however, reason to listen to Hinton’s fears. There’s no scientific obstacle we know about that will prevent super-intelligent machines being built. And we can easily imagine the harms such machines could cause to humanity in the hands of bad human actors.
New chemical weapons. Tick. AI researchers last year tweaked an AI program for inventing new drugs so it came up with a potential new chemical weapon similar to VX, one of the most dangerous nerve agents ever invented.
Deepfakes starting a war. Tick. Gabon’s military attempted an ultimately unsuccessful coup in 2019 following what many believed was a deepfake video of the President looking unwell.
But these aren’t the only harms expected from AI. On Monday, Lorraine Finlay, Australia’s Human Rights Commissioner, warned that generative AI could be used to swamp democracy with untruths. There’s a lot of validity to her fears. Political parties are already running deepfake campaign adverts. And many people are surprised to discover the Pope has never worn a white puffer jacket. Images once seen cannot be unseen.
But all these are expected harms. My concern is that while we’re addressing the very real and expected problems Hinton and Finlay have warned about, we’ll miss something even bigger. In fact, I fear we’re missing what is the greatest heist in all of human history.

It began with Big Tech’s usual playbook. It’s the same playbook used by drug dealers. Give the product away for free to get the user hooked on it. In this case, we’re quickly becoming dependent on AI tools such as ChatGPT. They’ll improve our productivity, reply to our emails, write advertising copy and do a thousand and one other routine tasks. And we soon won’t be able to imagine going back to a world without them.
But what exactly is the heist? It started with the theft of a large chunk of human knowledge. Large language models such as ChatGPT have ingested much of the internet, all of Wikipedia and all of the US patent database. Soon it will be much of our scientific knowledge. Meta built Galactica, a large language model trained on millions of scientific articles. Knowledge about current affairs? The latest models, such as Bing and Bard, have access to the world’s news. And let’s not forget all our cultural knowledge. Google has scanned more than 25 million books. As for our social knowledge, these models already incorporate huge amounts of social media.
In fact, we’re running out of digitised text on which to train these models. The tech companies have made it clear how they’ll address this. The next pivot is multimodal. Images, audio and video, as well as the text that accompanies them. All the world’s podcasts? Surely next on someone’s list. Songs. YouTube videos. Films. TV shows.
But it won’t stop there. Knowledge about our economy? Google already tracks around 70 per cent of credit and debit card transactions in the US. Knowledge about our geography? Google Earth is a start on this. Knowledge about us? Google and Apple already track our smartphones. And what do you think Amazon is planning to do with all that information from Alexa? Or Google with all that information from our Fitbits?
Imagine then a future where these large AI models ingest all of this knowledge. All digital knowledge. All of science. All our knowledge about the planet. Our economies. Our culture. Our society. Our lives. This is Big Brother but not as Orwell imagined. Not a government, but a large tech company knowing more about us and the world than a human could possibly comprehend. And imagine these models use all this information to manipulate what we do and what we buy in ways we couldn’t begin to understand.
But perhaps the most beautiful part of this digital heist is that all of this knowledge is being acquired in broad daylight for free. Napster sounds like a rather minor and petty crime in comparison.
Professor Toby Walsh is chief scientist at the UNSW AI Institute