Guide to AI election: Microsoft says it’s up to humans to verify content generated by artificial intelligence
In what is one of the biggest election years, Microsoft executive Divya Kumar says humans are key to ensuring the content generated by the company’s Copilot platform is accurate.
A Microsoft executive says it’s up to users to verify content generated by its Copilot AI assistant to stamp out misinformation in what will be one of the biggest election years in history.
Concerns are growing that 2024 could become the year that AI ‘stole’ an election, following the widespread adoption of the technology since the launch of ChatGPT 18 months ago.
In an effort to allay fears, ChatGPT owner OpenAI has outlined limits on the political use of its tools ahead of polls in the US, UK, EU, India and other democracies.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” OpenAI said in a statement.
“We’re still working to understand how effective our tools might be for personalised persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying.”
But Microsoft’s global head of marketing for search and AI, Divya Kumar, said humans were key to ensuring the content generated by the company’s Copilot platform was accurate.
“That’s precisely why we have citations built in because we do think there is a huge responsibility on the person using the tool, regardless of the tool being able to do the verification,” Ms Kumar told The Australian.
“We wanted to make sure that the sources are identified, especially when you’re doing web search. A lot of it comes down to the question you’re asking and the level of refinement you’re able to do, or willing to do, as part of any search you are doing.
“The reason why we’ve called the Microsoft Copilot is because we think the human is the pilot. Without the pilot, the copilot isn’t really as effective.”
To this end, Ms Kumar’s colleague Colette Stallbaumer, general manager of Microsoft 365 and future of work, has described generative AI – technology that can create a raft of content from simple verbal prompts – as “usefully wrong”. “It’s going to get you to good, so much faster, and then the human enters and decides how to get that from good to great,” Ms Stallbaumer told The Australian last year.
Companies rather than governments – with the US and Australia mindful of stifling innovation – have been taking the lead in combating misinformation. Fox Corporation launched a tool last week that uses blockchain technology to verify the authenticity of news content.
The Albanese government is yet to take action, instead flagging its intention to strengthen existing laws in areas that will “help to address known harms with AI”.
“This includes the implementation of privacy law reforms, a review of the Online Safety Act 2021, and introduction of new laws relating to misinformation and disinformation,” the government said this week in its interim response on the safe and responsible use of AI.
It comes as Microsoft launched a more advanced version of Copilot – which allows people and businesses to effectively create their own version of ChatGPT – for $US20 a month.
Copilot is built into Windows 11 and key Microsoft programs, including Outlook, Word and PowerPoint. The move is set to place powerful AI tools in the hands of more people, thrusting the technology deeper into the mainstream.
“Those who have been using (Copilot) as part of our early access program … say that they saw average (time) savings of 30 minutes a day when they used Copilot and the majority of those who went through the program – about 77 per cent – said they don’t want to go back,” Ms Kumar said.
She said programmers, creators and researchers were so far among the biggest users of Copilot, which has already generated more than 5 billion chats since its launch last year.
The platform can summarise complex documents and email chains, as well as draft content such as letters, research and presentations via simple verbal prompts. Given it draws on web search, the need for verification is crucial.
Google’s Bard AI assistant, a rival to Microsoft’s platform, attracted criticism last year when it backed the Indigenous Voice to parliament as a “positive step”, praised Anthony Albanese as a “man of the people”, and labelled Peter Dutton and Scott Morrison as “controversial”, sparking concerns over political bias and “propaganda” from Big Tech.
The chatbot also praised former Greens senator turned independent Lidia Thorpe as a “strong and outspoken advocate” and “a role model for all Australians”.
US senator Richard Blumenthal highlighted such concerns at a congressional hearing last May, where OpenAI chief executive Sam Altman testified, by playing a demonstration of his own AI-generated voice reading a statement to illustrate the risks.
“What if it had provided an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?” Senator Blumenthal said.
This week, OpenAI said: “We expect and aim for people to use our tools safely and responsibly, and elections are no different.”
“We work to anticipate and prevent relevant abuse — such as misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates. Prior to releasing new systems, we red team (offensive security check) them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm.
“For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests. These tools provide a strong foundation for our work around election integrity.”