NewsBite

Finance worker at global firm conned into transferring $38 million after deepfake scammers impersonate his boss

Scammers have hooked a big fish this week, tricking a finance worker into transferring millions in a heartbeat with a ruse that could fool almost anyone.

A finance employee at a global company has been deceived into transferring $38.8 million to scammers who used advanced deepfake technology to stage a fake meeting with his boss.

Scammers used cutting-edge technology to impersonate the Hong Kong firm’s chief financial officer during a video call.

The finance worker was hoodwinked and transferred the eight-figure sum straight into the scammers’ pockets.

This complex fraud saw the employee lured into a fake video conference under the impression he was meeting with several colleagues.

However, the figures he interacted with were all artificial creations generated by deepfake technology, according to authorities in Hong Kong.

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” Senior Superintendent Baron Chan Shun-ching explained.

The fraudulent act involving the counterfeit CFO came to light only after the finance worker verified the transaction with the company’s main office.

The incident is among the latest in a series of frauds where criminals have exploited deepfake technology to manipulate existing video and other media for financial scams.

During the same press briefing, Hong Kong police officers revealed that they had arrested six individuals in relation to similar fraud schemes.

Chan added that between July and September last year, fraudsters had used eight stolen Hong Kong identity cards, all reported as lost, to make 90 loan applications and register 54 bank accounts.

Deepfake technology was employed on at least 20 occasions to bypass facial recognition security measures by mimicking the identities of the cardholders.

Critics of the highly advanced technology have long warned of the potentially catastrophic repercussions of ultra-realistic AI imagery, particularly its use in scams.

While deepfakes have potential for legitimate applications in entertainment, education, and content creation, they have also given rise to more creative forms of scams and malicious activities.

To create a deepfake, substantial amounts of data (images or video footage) of the target person are collected. The more data available, the more convincing the deepfake can be.
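As a rough illustration of that data-gathering step (and not the scammers’ actual tooling), the sketch below uses the open-source OpenCV library to harvest face crops from a publicly available video; the file names and sampling rate are hypothetical.

```python
# Illustrative sketch only: harvesting face crops from public footage,
# the kind of raw data deepfake models are trained on.
# Requires `pip install opencv-python`; paths are hypothetical.
import os

import cv2

VIDEO_PATH = "target_interview.mp4"  # hypothetical source footage
OUTPUT_DIR = "face_crops"
FRAME_STRIDE = 15  # sample roughly twice per second at 30 fps

os.makedirs(OUTPUT_DIR, exist_ok=True)

# OpenCV ships a pretrained Haar-cascade face detector with the library.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(VIDEO_PATH)
frame_index = saved = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    if frame_index % FRAME_STRIDE == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect faces in the sampled frame and save each crop to disk.
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crop = frame[y:y + h, x:x + w]
            cv2.imwrite(os.path.join(OUTPUT_DIR, f"face_{saved:05d}.png"), crop)
            saved += 1
    frame_index += 1
capture.release()
print(f"Saved {saved} face crops from {frame_index} frames")
```

A few minutes of interview footage processed this way can yield hundreds of such crops, which is why people with a large public video presence are the easiest targets.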

With the explosion of social media over the past 20 years, coupled with the fact that more smartphone owners now use their face to unlock their device, sophisticated scammers and hackers have an ever-growing smorgasbord of material at their disposal.

High-profile individuals have regularly been morphed into fake videos promoting scams on social media.

Recently, the likeness of Aussie billionaire Dr Andrew Forrest was used in a deepfake crypto video scam.

The businessman and mining magnate, nicknamed Twiggy, had his identity used to spruik a get-rich-quick scheme in an ad circulating on Instagram.

The manipulated video, which surfaced late last month on the Meta-owned platform, shows ‘Dr Forrest’ urging users to sign up for a fraudulent platform that promises to make “ordinary people” thousands of dollars daily.

It then takes victims to a website called “Quantum AI”, a name that has become synonymous with scams and financial fraud, according to Cybertrace, the intelligence-led cyber investigations company that identified the scam video.

The clip was carefully edited from a Rhodes Trust “fireside chat”, altering Dr Forrest’s appearance and behaviour to make it look as though he is promoting software for trading cryptocurrencies.

There are also serious risks of disinformation as deepfakes become increasingly common online.

US analysts have already raised the alarm about audio deepfakes ahead of the 2024 presidential election, tipped to be one of the most fiery in recent memory.

A robocall featuring a faked US President Joe Biden has caused particular alarm.

The robocall urged New Hampshire residents not to cast ballots in the Democratic primary last month, prompting state authorities to launch a probe into possible voter suppression.

US regulators have been considering making AI-generated robocalls illegal, with the fake Biden call giving the effort new impetus.

“The political deepfake moment is here,” said Robert Weissman, president of the advocacy group Public Citizen.

“Policymakers must rush to put in place protections or we’re facing electoral chaos. The New Hampshire deepfake is a reminder of the many ways that deepfakes can sow confusion.”

Researchers fret over the impact of AI tools that create videos and text so seemingly real that voters could struggle to decipher truth from fiction, undermining trust in the electoral process.

But audio deepfakes used to impersonate or smear celebrities and politicians around the world have sparked the most concern.

“Of all the surfaces — video, image, audio — that AI can be used for voter suppression, audio is the biggest vulnerability,” Tim Harper, a senior policy analyst at the Center for Democracy & Technology, told AFP.

“It is easy to clone a voice using AI, and it is difficult to identify.”

Original URL: https://www.news.com.au/technology/online/security/finance-worker-at-global-firm-conned-into-transferring-38-million-after-deepfake-scammers-impersonate-his-boss/news-story/142b5a15148094517394471bdd9b642d