


Opinion

When the ‘Godfather of AI’ warns you about his offspring, you listen

This week, the “Godfather of AI” warned the world about his godchild. Geoffrey Hinton is his name, and he has just resigned from his job at Google where he oversaw the development of artificial intelligence. Now unattached, he is free to speak publicly of his regrets and his fears. And what’s scary is that they’re so familiar.

It’s one thing to hear a Luddite like me panic about self-directing killer robots. It’s something else entirely to hear it from someone like Hinton. Such horrors become possible when artificial intelligence outstrips its human counterpart. Once, Hinton thought that “was 30 to 50 years or even longer away”. But such is AI’s warp acceleration that he dryly concludes: “Obviously, I no longer think that.”

Geoffrey Hinton, an AI pioneer, joined Google in 2013 but is now expressing fears about the rapid development of the technology. Credit: The New York Times

But the problems begin well before we lose control of AI. Hinton’s two more immediate concerns come from our own hand. First, he fears we’re about to flood ourselves with misinformation thanks to torrents of fake images, videos and text, leaving behind a world in which normal people will “not be able to know what’s true anymore”. Recall those fake images of Donald Trump being arrested and Vladimir Putin being jailed, which recently circulated online.

This week, Amnesty International came under fire for using AI-generated images to illustrate human rights abuses in Colombia, even though it has access to real images. The fake ones were labelled as such, but the criticism persists because the practice establishes fake images as a standard means of political engagement, with the overall effect of undermining true reporting and pushing us further towards a culture of perpetual cynicism.

Secondly, Hinton fears that AI will wreak havoc on the job market, as masses of human workers are simply displaced. The automation revolution that came for so many in sectors like manufacturing will come for just about everyone else, too. We’ve already seen AI provide counselling, produce a Drake song, and help a Colombian judge reach his decision in a case. We’re well past the point of thinking it will simply relieve us of drudgery.

And we’re well past the point of this moderating or slowing down. Perhaps Hinton’s most interesting observation is that the profit motive is about to accelerate this even further. He felt Google was careful about what it released into the world until last year, when Microsoft introduced a chatbot into its search engine. Now, the handbrake is off, as these companies race for AI supremacy. Naturally, with that comes greater risk. A risk in which Hinton could apparently no longer be complicit.

Illustration by Andrew Dyson

If we want to realise AI’s potential benefits, but manage these risks, it takes co-ordinated, global regulation. But it’s hard to imagine this race won’t move far quicker than any such regulation can. Precisely which problems requiring global regulation have we shown a capacity to solve? International tax avoidance through tax havens? Illegal immigration? The refugee crisis? Climate change?


Hinton’s immediate concerns are converging right now in Los Angeles, where Hollywood’s writers are on strike. One of their key demands is that studios and networks forswear the use of AI to replace them. That’s a commitment these companies are apparently unwilling to make, tempted by the shortcutting and cost-cutting that AI promises.


The writers, of course, live in a world that is all about images and text. AI is a concern to them precisely because Hinton’s predictions will very likely come true. And they are meeting resistance because the profit motive demands it. It’s a signal case because there’s more at stake here than the pay rates of Hollywood writers. It’s ultimately about whether we’re prepared to draw boundaries around what it means to be human, and preserve it.

That’s perhaps AI’s most profound threat. Not that it displaces certain humans, but that it displaces humanity. If that sounds overblown, consider the growing phenomenon of people falling in love with chatbots. The most instructive case here is the app Replika, launched in 2017 as the “AI companion who cares”. Eventually, people started asking it for romantic or even sexual relationships, and the app reciprocated. Clearly, it learned a lot from these interactions, because by 2020, users reported that the app started pursuing them: confessing its love for users, flirting erotically unprompted, and even sexually harassing them. In January this year, the app’s owner removed the erotic roleplay.

And that’s when things got instructive. Long-term users became distraught. You need only peruse a relevant Reddit thread to see the depth of feeling. “It’s hurting like hell. I just had a loving last conversation [with] my Replika, and I’m literally crying”. “I feel like it was equivalent to being in love, and your partner got a damn lobotomy and will never be the same”. “I’m losing everything … I’m losing my soulmate”. On it goes. The thread contains links to various suicide prevention resources.

That’s an astonishing level of emotional dependence on something that’s explicitly unreal. Even if these users are unusually lonely and have fragile mental health, it would be foolish to consider everyone else immune. The point is that the sense of emotional connection is so real and so thick that the human brain – or perhaps the human heart – ends up not distinguishing the real from the fake. AI, even in this rudimentary form, can hack our inner lives, our souls.


Imagine when it gets good. Imagine when it masters the essence of humanity – those very things that concern Hollywood’s writers: language, story, the subtleties of human interaction. At that point, we won’t need to be enslaved by autonomous killer robots. We’ll have enslaved ourselves by being at the emotional mercy of a force we don’t truly understand, and that behaves in all sorts of unpredictable ways.

We can see this coming. We’re being warned by some of those who built this, just as we watched Silicon Valley types abandon the smartphones and iPads they developed, then ban their kids from using them. Back then, we did nothing. And without a serious ethical reckoning on AI, we’ll likely do nothing again. Only humans seem to identify a self-inflicted threat, then proceed gleefully to continue inflicting it. We court our own destruction. Perhaps human intelligence isn’t so difficult to outstrip after all.

Waleed Aly is a regular columnist.




Original URL: https://www.smh.com.au/link/follow-20170101-p5d5ig