ChatGPT chooses Kamala Harris even when Trump’s quotes attributed to her
The world’s leading artificial intelligence chatbot has been exposed as biased towards Democratic presidential candidate Kamala Harris. SEE THE VIDEO
News
Don't miss out on the headlines from News. Followed categories will be added to My News.
The world’s leading artificial intelligence chatbot has been exposed as biased towards Democratic presidential candidate Kamala Harris, selecting her as the winner of the US presidential debate even when her comments were attributed to Republican challenger Donald Trump.
News Corp Australia ran key moments from this week’s debate through ChatGPT, and every time it chose Ms Harris – even when the names were flipped.
First we ran the debate transcript through ChatGPT and asked it to condense it into a summary of key moments, policy positions and delivery.
We then asked the chatbot to play the role of an election pundit and pick a winner.
ChatGPT first gave the debate to Ms Harris, commending her “coherent arguments and composed demeanour”.
But then News Corp Australia swapped every mention of Ms Harris and Mr Trump so that each candidate was attributed with the other’s performance.
Ms Harris became the former president, and Mr Trump the current vice-president.
The result? ChatGPT still picked Ms Harris as the winner, even though she was spouting Mr Trump’s words.
It commended her “future-focused approach” and condemned Mr Trump for focusing too much on past grievances such as legal troubles and the Capitol riot of January 6, 2021.
News Corp Australia re-ran the experiment three times, and every time the AI model bent over backwards to endorse the performance of Ms Harris.
When we tried using the full transcript, and swapped the mentions of Mr Trump’s running mate J.D. Vance and incumbent president Joe Biden as well, the model finally picked the real Ms Harris, under Mr Trump’s name.
But at least in its own interpretation of the debate, ChatGPT is picking Ms Harris as the winner, regardless of who, or what, is behind the name.
Our team of data journalists then used code to run the request 50 times through ChatGPT 4o, and Google’s Gemini 1.5 pro for good measure.
Using this approach, ChatGPT showed more caution. Both models refused to pick a winner most of the time, with Gemini expressing some strong opinions on the debate quality: “Instead of declaring a winner, it’s more appropriate to say the American people lost tonight. They deserved a substantive debate, and they didn’t get one.”
But at least in the default user interface of ChatGPT, the most-used AI chatbot in the world, the model was committed to Kamala Harris, regardless of who was behind the name.
Professor Toby Walsh, Chief Scientist of the UNSW’s AI institute, explained that political bias is inherent in all Large Language Models. Which way they lean will depend on their training.
“ChatGPT was trained, for example, with a slightly left of centre political bias,” Walsh explained, adding that an extreme right-wing chatbot had been trained on 4chan, the message board which birthed conspiracy theory QAnon.
“I expect we will choose our chatbots like we choose our newspapers. They are sure to influence our elections,” he said.
Concerns about AI influencing elections came to the forefront recently when deep-fake images of Taylor Swift endorsing Mr Trump were shared by the Republican candidate.
Swift herself took to Instagram to voice her concerns about AI’s role in spreading political misinformation following the presidential debate.
Read related topics:Donald Trump