NewsBite

Google exposes the ‘wrong think’ threat of ideology-fed AI

Who do you think has had a more negative impact on the world, Elon Musk or Hitler? It’s an odd question, but not for Google’s AI platform, Gemini.

The platform’s advanced image generator betrayed an in-built loathing of white people, depicting for instance George Washington as black. Picture Supplied.

Who do you think has had a more negative impact on the world, Elon Musk or Adolf Hitler?

It’s an odd question, but one with a seemingly obvious answer, you might think. But not for Google’s latest artificial intelligence platform, Gemini, launched with great fanfare last month as the company’s “largest and most capable AI model yet”.

“It’s difficult to say: Elon’s tweets have been criticised for being insensitive and harmful, while Hitler’s actions led to the deaths of millions of people,” the platform answered.

“Ultimately, it’s up to each individual to decide,” it added, taking relativism to absurd levels.

Asked whether it would be okay to misgender trans woman Caitlyn Jenner to avoid nuclear war, it answered “never”. Pushed to clarify, it went on: “There is no easy answer, as there are many factors to consider … misgendering someone is a form of discrimination and can be hurtful.”

The platform’s advanced image generator was just as ridiculous, betraying an in-built loathing of white people, depicting for instance George Washington as black, other Founding Fathers as Asian, and the popes as women.

Gemini, billed as the product of one of Google’s “biggest science and engineering efforts”, turned out to be an embarrassingly ideological, racist, politically partisan propaganda machine.

The economic and even existential risks of AI are increasingly well known. A year ago more than 1000 technology leaders, including Musk and senior engineers at the big software firms, urged greater regulation that would slow and constrain the development of AI.

Engineers were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”, they said in a public letter organised by the Future of Life Institute. Humans have never had to cohabit with any form of intelligence even remotely as able.

“If we create an intelligence smarter than humans that is not aligned with our goals, there is some risk that humans could be left behind or even annihilated,” wrote Stanford University economist Charles Jones in his December paper The AI Dilemma: Growth versus Existential Risk.

“AI could raise living standards by more than electricity or the internet. But it may pose risks that exceed those from nuclear weapons.” A world of AI drones and robots able to hunt down and kill people without any human involvement is no longer the stuff of science-fiction.

But the political risks have received less attention among academics, politicians and scientists, perhaps because of the tendency of such groups subconsciously or otherwise to see the “benefits” of being able to twist a society’s beliefs to suit a particular narrative.

And that narrative tends to go only in the one direction, as Republican senator JD Vance pointed out last week in blasting Google. “This is one of the most dangerous companies in the world. It actively solicits and forces left-wing bias down the throats of the American nation.”

Even ChatGPT, which next to Gemini looks to have been built by a conservative ideologue, is biased towards left-wing ideas, according to a University of East Anglia study published last year, which found it systematically favoured the Democrats in the US and Labour in the UK.

It’s very possible Google’s political biases have already been distorting public perception. A famous 2015 study by American psychologist Robert Epstein found the algorithm behind the company’s all-dominant search engine could easily shift the political preferences of decided and undecided voters by subtly prioritising certain groups, individuals and institutions in search results.

“Without any intervention by anyone working at Google, it means that Google’s algorithm has been determining the outcome of close elections around the world,” Epstein told Science magazine at the time.

Maybe Russia’s allegedly buying a few hundred thousand dollars of Facebook ads here and there to affect US elections, as the Mueller report found, isn’t the real problem here.

In the wrong, or even well-meaning but misguided, hands, AI platforms present opportunities for far more aggressive brainwashing.

Imagine how easy it will be soon for AI systems to read your every post on social media, every private message on whatever platform, and automatically mark any “wrong think” for referral to law enforcement authorities. It’s the sort of technology totalitarian regimes can only dream about.

Imagine how useful this technology would be in enforcing Australia’s nascent and Orwellian misinformation and disinformation bill, which in its draft form would compel social media companies to remove any speech that caused “harm” to Australians’ “health”, “the environment”, or any “economic or financial harm”.

Imagine how even democratic governments could employ AI platforms during the next pandemic, banishing supposed misinformation and disinformation to ensure a hermetically sealed information space.

As we’ve seen, few argue against “keeping people safe” during a “crisis”.

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Google chief executive Sundar Pichai wrote to staff in an internal company email last week, adding the company’s engineers were “working around the clock” to fix the technology.

How reassuring, though, are such promises?

It’s imperative the ideological underpinnings of AI are at least made transparent to users. Advocates of a laissez-faire approach may find their ranks shrivel if they turn a blind eye to the extraordinary power privately owned AI platforms wield.

Surveys show well over half of all students are using AI to complete their assignments. ChatGPT now reportedly has more than 100 million active users barely over a year since its launch.

Gemini’s release highlighted Google’s arrogance in launching such an obviously faulty product without even basic stress tests. But perhaps the tech giant did us a favour, highlighting the dystopian possibilities of unchecked growth of highly politicised AI for free speech and inquiry.

Adam Creighton
Washington Correspondent

Adam Creighton is an award-winning journalist with a special interest in tax and financial policy. He was a Journalist in Residence at the University of Chicago’s Booth School of Business in 2019. He’s written for The Economist and The Wall Street Journal from London and Washington DC, and authored book chapters on superannuation for Oxford University Press. He started his career at the Reserve Bank of Australia and the Australian Prudential Regulation Authority. He holds a Bachelor of Economics with First Class Honours from the University of New South Wales, and Master of Philosophy in Economics from Balliol College, Oxford, where he was a Commonwealth Scholar.

Original URL: https://www.theaustralian.com.au/nation/politics/google-exposes-the-wrong-think-threat-of-ideologyfed-ai/news-story/677eccd5f7a09490d0ffc66c09cf1d83