Fake AI image of Pentagon explosion sparks concern
The fake image showed the Pentagon on fire and caused brief market turbulence. AI researchers are worried, warning the technology could bring about a “democratisation of propaganda and untruth”.
Australian artificial intelligence researchers have warned of the risks of AI after a fake image of an explosion at the Pentagon briefly went viral and caused a 10-minute dip on Wall Street overnight.
Image generation is another rapidly moving field of generative AI. Leaders in the field include Stable Diffusion from UK firm Stability AI, Midjourney from San Francisco, and DALL-E from OpenAI, the maker of ChatGPT.
The Pentagon clarified there was no explosion.
“We can confirm this was a false report and the Pentagon was not attacked today,” a spokesman said.
The fire department in Arlington, Virginia, also responded, posting on social media that there was no explosion or incident taking place at or near the Pentagon.
The widely shared image knocked markets for a few minutes, with the S&P 500 dipping 0.3 per cent from its Friday close before recovering.
“There was a dip likely related to this fake news as the (trading) machines picked up on it, but I would submit that the scope of the decline did not match the seemingly bad nature of the fake news,” said Pat O’Hare, of Briefing.com.
Tiberio Caetano, chief scientist at the Gradient Institute, a non-profit AI ethics research organisation, said counterfeit content was a “massive issue” with generative AI.
“Human life, it’s predicated on the idea that we believe in trust,” he said.
“Trust is really the backbone of civilisation. Without trust, we can’t do anything.
“Very soon we will be in a situation where those problems will become more and more common, and we need to, society needs to figure out ways to sound that alarm.”
He said the idea of a cryptographic watermark was being hotly researched. Such a watermark would embed signals, undetectable by humans, into the words and pixels that generative AI produces; other computers could then detect those signals and flag the content as machine-made.
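To illustrate the idea only, here is a minimal Python sketch of a keyed pixel watermark. It uses simplistic least-significant-bit embedding as a stand-in for the cryptographic schemes actually under research; the key, site count, and function names are hypothetical and not drawn from any real product.

```python
# Illustrative toy only: a keyed least-significant-bit (LSB) watermark.
# Real generative-AI watermarking schemes are far more robust; this just
# shows the core idea of human-invisible, machine-detectable pixel signals.
import numpy as np

KEY = 42          # hypothetical shared secret between embedder and detector
N_SITES = 4096    # hypothetical number of pixel sites carrying the mark

def _sites(shape, key, n):
    """Choose n pseudo-random pixel indices derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice(shape[0] * shape[1], size=n, replace=False)

def embed(image, key=KEY, n=N_SITES):
    """Force the least-significant bit to 1 at the key-selected sites.
    An LSB change shifts a pixel value by at most 1: invisible to humans."""
    flat = image.reshape(-1).copy()
    idx = _sites(image.shape, key, n)
    flat[idx] |= 1
    return flat.reshape(image.shape)

def detect(image, key=KEY, n=N_SITES, threshold=0.9):
    """An unmarked image has ~50% of LSBs set at the key's sites;
    a marked one has ~100%, so a high fraction signals the watermark."""
    flat = image.reshape(-1)
    idx = _sites(image.shape, key, n)
    return np.mean(flat[idx] & 1) >= threshold

# Demo on random 8-bit "pixels"
img = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
print(detect(img))          # False: no watermark present
print(detect(embed(img)))   # True: watermark detected
```

Because an LSB change alters a pixel value by at most one, the mark is imperceptible to people, yet a detector holding the same key can find it reliably. The schemes researchers are pursuing aim for the same property while also surviving compression, cropping and editing.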
Monash University data science and AI professor Geoff Webb said generative AI tools could see a “democratisation of propaganda and untruth”.
“It’s always been possible to generate that sort of image, but it’s required a lot of specialised knowledge and has been hard to do,” Professor Webb said.
“Now, anyone can do it – there are going to be many, many more of them.”
He said AI image generation tools were very easy to use.
“You don’t even need to be able to download and install a program, you can just go online and do it all online.”
Professor Webb said regulation would be “extremely difficult”.
“We just don’t know what regulatory guardrails ought to be put in place,” he said.
“It’s evolving too fast, and when it’s out as open-source, there’s going to be a lot of people who will access and modify these things who aren’t going to pay any attention to legal frameworks anyway.”