AI fakes and Twitter’s lack of control are a dangerous combination
By Tim Biggs
Twitter’s widely criticised reinvention of its verification program, combined with a new generation of synthetic media powered by generative AI, could create a perfect storm for misinformation on the platform as it tries to woo back advertisers and the 2024 US presidential race ramps up.
Last week, a faked image of an explosion outside the Pentagon briefly went viral on the service. Despite Twitter’s rules against impersonation, at least one of the profiles posting the image was made to look like an official Bloomberg account, complete with a blue verification tick.
Previously these ticks were granted by Twitter to verified individuals, organisations and celebrities at risk of impersonation, but under chief executive Elon Musk any user can buy a tick as part of a subscription service known as Twitter Blue.
Several other accounts posting the image were dedicated misinformation accounts designed to look like legitimate news outlets – all with blue ticks – which allowed the image to be seen hundreds of thousands of times with captions like “BREAKING: Explosion near Pentagon”. The image was also presented as genuine by news outlets controlled by the Russian government.
US officials confirmed that no such explosion had taken place, but not before at least one influential account had shared the image, apparently triggering a brief but significant dip in US stock markets.
Experts have speculated that the image was made with generative AI, though the exact source has not been pinned down. Regardless, Swinburne University senior media lecturer Dr Belinda Barnet said it was the lack of proper protections on Twitter that allowed it to go viral.
“The kind of system that we had in place had its flaws; it wasn’t perfect. But it was a system for identifying firstly that people are who they say they are. And secondly, giving a bit of precedence to information that’s more likely to have been fact-checked,” she said.
“But when [Musk] kind of started selling verification on the back of cereal boxes, it lost its efficacy. The verification now is you have a credit card.”
The new system also means that media shared by accounts with a blue tick is ranked higher by Twitter’s algorithms, so entire networks of misinformation spreaders can push synthetic media like this into more people’s feeds simply by paying $13 a month for Twitter Blue.
“I think that this particular image was algorithmically boosted by accounts that had bought verification. In essence it’s easier now, much easier, to have misinformation go viral on Twitter,” Barnet said.
“It’s somewhat ironic that [Musk] is one of the people out there saying that we should be worried about AI, but the system on the platform he bought that was there to deal with misinformation is one of the first things he dismantled.”
Musk and Twitter have been campaigning to bring advertisers back to the platform, after the verification changes and other new initiatives prompted an exodus over the past six months. Organisations can now buy special gold ticks to prove their legitimacy, as well as custom logos indicating that accounts belong to a particular organisation. Musk has also appointed veteran advertising executive Linda Yaccarino as Twitter’s next chief.
At the same time, Musk has begun positioning Twitter as a town square for discussion of the 2024 US election, inviting Florida Governor Ron DeSantis to announce his candidacy via Twitter Spaces last week and declaring that all candidates are welcome on the platform.
Even so, verified users on Twitter are still claiming the 2020 US election was fraudulently stolen by the Biden campaign, despite Musk promising earlier this month that these claims would be eliminated.
Twitter was contacted for comment and replied with a poop emoji.
Barnet said that while synthetic media such as the faked Pentagon photo was nothing new, the rise of generative AI tools that create content from text prompts meant it could be produced more quickly and made more specific.
“To generate misinformation in the past, you needed to know how to use Photoshop. Or if you were generating written text, you’d want to sound as much like a journalist as you possibly could,” she said. “And this is all available now at the click of a button.”
Companies including OpenAI and Microsoft build protections into their products designed to prevent dangerous media from being created, such as watermarking generated images or refusing requests containing certain words or themes. Social media platforms including Twitter also have the capacity to recognise and label fake images, although in the case of the Pentagon picture, Twitter didn’t do so until hours after the fact.
But none of these measures are foolproof, and Barnet said it would likely fall to users to think before they engaged.
“I don’t know if any one thing can stop it. In the meantime, I’ve been telling people, particularly since verification was watered down on Twitter, that they’re going to have to manually look for the markers of authenticity. The markers that indicate it’s probably been fact-checked.”