Deepfake video of Jacinda Ardern smoking crack highlights sinister technology

The deepfake clip of Jacinda Ardern was widely shared. She was the latest target of a disturbingly advanced type of technology.

When a video purporting to show New Zealand Prime Minister Jacinda Ardern smoking drugs surfaced on social media in recent months, experts quickly dismissed it as a fake.

The video, which was viewed and shared thousands of times, showed a woman smoking from what appeared to be a crack pipe.

The PM’s face had been superimposed using artificial intelligence. But the video, created for YouTube, was convincing enough to fool many of those who shared it.

It was the latest example of how disturbingly authentic-looking videos can blur the line between reality and fantasy.

Experts say videos like it, created with the use of deep fake technology, are becoming so sophisticated that soon it will be “almost impossible to detect a fake picture or video with the naked eye”.

They say its uses will expand to identity theft, sexual exploitation, reputational damage and harassment, “military deception” and “the erosion of trust in institutions and fair election processes”.

A deepfake video purporting to show Jacinda Ardern did the rounds on social media in recent months.

Eroding trust in democratic institutions

The Australian Strategic Policy Institute’s latest report on the technology, titled Weaponised deep fakes, takes a deep dive into where the problem is heading.

“Deep fakes will pose the most risk when combined with other technologies and social trends: they’ll enhance cyberattacks, accelerate the spread of propaganda and disinformation online and exacerbate declining trust in democratic institutions,” the report reads.

The authors say the “Russian model” of disinformation, which relies on pumping out large volumes of propaganda, stands to benefit most from the technology in coming years.

“Online propaganda is already a significant problem, especially for democracies, but deep fakes will lower the costs of engaging in information warfare at scale and broaden the range of actors able to engage in it,” the report reads.

“Today, propaganda is largely generated by humans, such as China’s ‘50-centres’ and Russian ‘troll farm’ operators. However, improvements in deep fake technology, especially text-generation tools, could help take humans ‘out of the loop’.

“The key reason for this isn’t that deep fakes are more authentic than human-generated content, but rather that they can produce ‘good enough’ content faster, and more economically, than current models for information warfare.

“Deep fake technology will be a particular value-add to the so-called Russian model of propaganda, which emphasises volume and rapidity of disinformation over plausibility and consistency in order to overwhelm, disorient and divide a target.”

A deepfake shows the transition from actor Miles Fisher to Tom Cruise.

‘Almost impossible to detect with the naked eye’

Australia’s eSafety Commissioner, dedicated to keeping Australians safe online, notes in its position statement on deep fakes that “advances in artificial intelligence and machine learning have taken the technology even further, allowing it to rapidly generate content that is extremely realistic, almost impossible to detect with the naked eye and difficult to debunk”.

“Indeed, the field is evolving so rapidly that deepfake content can be generated without the need for any human supervision at all, using what is called recycled generative adversarial networks,” the statement reads.
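
The generative adversarial networks the statement refers to are the core technique behind most deepfakes: two neural networks trained against each other, one producing forgeries and the other judging them, until the forgeries pass. As a rough illustration only, and not code from the report or the eSafety Commissioner, here is a toy PyTorch sketch of that adversarial loop, using a simple one-dimensional distribution in place of faces:

    # Toy generative adversarial network (GAN) in PyTorch -- illustrative only.
    # Real deepfake systems use far larger image and video models, but the
    # adversarial idea is the same: a generator learns to produce samples that
    # a discriminator can no longer tell apart from real data.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n):
        # Stand-in "real" data: a 1-D Gaussian the generator must imitate.
        return torch.randn(n, 1) * 1.5 + 4.0

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        # 1) Train the discriminator to separate real from generated samples.
        real = real_batch(64)
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2) Train the generator to fool the discriminator.
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    with torch.no_grad():
        samples = generator(torch.randn(1000, 8))
    print("generated mean/std:", samples.mean().item(), samples.std().item())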

“Deep fakes have the potential to cause significant harm. To date, they have been used to create fake news, false pornographic videos and malicious hoaxes, usually targeting well-known people such as politicians and celebrities. Potentially, deepfakes can be used as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation and harassment”.

The position statement noted that deep fakes include the “heightened potential for fraud, propaganda and disinformation, military deception and the erosion of trust in institutions and fair election processes”.

A deepfake video that has been manipulated with artificial intelligence to potentially deceive viewers. Picture: Alexandra Robinson/AFP/Getty Images

Real world uses that made an impact

Perhaps the most widely shared deepfake video comes from the United States, where House Speaker Nancy Pelosi was the target of a sinister attempt to undermine her credibility.

The clip showed Ms Pelosi speaking on camera, but the footage and audio had been slowed and manipulated to make her appear to be under the influence of alcohol.

The video was shared thousands of times and viewed millions of times.

Donald Trump’s personal lawyer at the time, Rudy Giuliani, tweeted a link to the video, giving it further credibility.

“What is wrong with Nancy Pelosi?” he wrote. “Her speech pattern is bizarre.”

Facebook founder Mark Zuckerberg was forced to respond after a deepfake video purported to show him mocking internet users.

The video, posted to Instagram (which Facebook owns), falsely portrayed the billionaire as saying: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

The video remains online today.

Experts say that even as deepfake videos become more sophisticated, many still leave behind telltale signs of doctoring.

These signs include blurring or pixelation, particularly around the mouth and eyes, badly synced sound, visual glitches, changes in lighting, gaps in the storyline and irregular blinking.
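
One of those signs, irregular blinking, lends itself to a simple automated check. The sketch below is illustrative only and assumes per-frame eye landmarks have already been extracted with a face-landmark library such as dlib or MediaPipe (an assumption; the article names no tools). It computes the widely used eye aspect ratio, which dips sharply during a blink, and reports how often a clip blinks compared with the 15 to 20 times a minute typical of real people:

    # Heuristic blink check -- a rough sketch, not a deepfake detector.
    # Assumes six (x, y) landmarks per eye per frame, obtained beforehand with
    # a face-landmark library such as dlib or MediaPipe (an assumption; the
    # article names no tools). The eye aspect ratio (EAR) drops sharply when
    # the eye closes, so a long clip whose EAR almost never dips is suspect.
    import numpy as np

    def eye_aspect_ratio(eye):
        # eye: array of shape (6, 2), landmarks ordered around one eye.
        a = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
        b = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
        c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
        return (a + b) / (2.0 * c)

    def blinks_per_minute(per_frame_eyes, fps, threshold=0.2):
        ears = np.array([eye_aspect_ratio(e) for e in per_frame_eyes])
        below = ears < threshold
        # A blink is a run of below-threshold frames; count its rising edges.
        blinks = int(np.sum(below[1:] & ~below[:-1]))
        minutes = len(ears) / fps / 60.0
        return blinks / minutes if minutes > 0 else 0.0

    # People typically blink around 15-20 times a minute; a rate near zero
    # over a long clip is a red flag worth a closer look, not proof of a fake.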

Original URL: https://www.news.com.au/technology/online/deepfake-video-of-jacinda-ardern-smoking-crack-highlights-sinister-technology/news-story/83de4142fd371d83652d51256564c803