AI is helping scammers outsmart you — and your bank

Your ‘spidery sense’ is no match for the new wave of scammers.

ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity.

Artificial intelligence is making scammers tougher to spot.

Gone are the poorly worded messages that easily tipped off authorities as well as the grammar police. The bad guys are now better writers and more convincing conversationalists, able to hold a conversation without revealing they are a bot, say bank and tech investigators who spend their days tracking the latest schemes.

ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity. In recent years, criminals have used AI-based software to impersonate senior executives and demand wire transfers.

“Your spidery senses are no longer going to prevent you from being victimised,” said Matt O’Neill, a former Secret Service agent and co-founder of cybersecurity firm 5OH Consulting.

Many of these recent frauds resemble old scams. But AI has enabled scammers to target much larger groups and use more personal information to convince you the scam is real.

Fraud-prevention officials say these tactics are often harder to spot because they bypass traditional indicators of scams, such as malicious links and poor wording and grammar. Criminals today are faking driver's licences and other identification in an attempt to open new bank accounts, and adding computer-generated faces and graphics to pass identity-verification processes. All of these methods are hard to fend off, the officials say.

JPMorgan Chase has begun using large-language models to fight identity fraud. Carisma Ramsey Fields, vice president of external communications at JPMorgan Chase, said the bank has also stepped up its efforts to educate customers about scams.

And while banks stop some fraud, the last line of defence will always be you. These security officials say to never share financial or personal information unless you’re certain about who’s on the receiving end. If you do pay, use a credit card because it offers the most protection.

“Somebody who tells you to pay by crypto, cash, gold, wire transfer or a payment app is likely a scam,” said Lois Greisman, an associate director of the Federal Trade Commission.

Tailored targeting

With AI as an accomplice, fraudsters are reaping more money from victims of all ages. People reported losing a record $US10bn ($15.07bn) to scams in 2023, up from $US9bn a year prior, according to the FTC. Since the FTC estimates only 5 per cent of fraud victims report their losses, the actual number could be closer to $US200bn.

Joey Rosati, who owns a small cryptocurrency firm, never thought he could fall for a scam until a man he believed to be a police officer called him in May.

The man told Rosati he had missed jury duty. The man seemed to know all about him, including his Social Security number and that he had just moved to a new house. Rosati followed the officer’s instruction to come down to the station in Hillsborough County, Fla.— which didn’t seem like something a scammer would suggest.

On the drive over, Rosati was asked to wire $US4500 to take care of the fine before he arrived. It was then that Rosati realised it was a scam and hung up.

“I’m not uneducated, young, immature. I have my head on my shoulders,” Rosati said. “But they were perfect.”

Social-engineering attacks like the jury-duty scam have grown more sophisticated with AI. Scammers use AI tools to unearth details about targets from social media and data breaches, cybersecurity experts say. AI can help them adapt their schemes in real time by generating personalised messages that convincingly mimic trusted individuals, persuading targets to send money or divulge sensitive information.

David Wenyu’s LinkedIn profile displayed an “open to work” banner when he received an email in May offering a job opportunity. It appeared to be from SmartLight Analytics, a legitimate company, and came six months after he had lost his job.

He accepted the offer, even though he noticed the email address was slightly different from those on the company’s website. The company issued him a check to purchase work-from-home equipment from a specific website. When they told him to buy the supplies before the money showed up in his account, he knew it was a scam.

“I was just emotionally too desperate, so I ignored those red flags,” Wenyu said.

In an April survey of 600 fraud-management officials at banks and financial institutions by banking software company BioCatch, 70 per cent said criminals were more skilled at using AI for financial crime than banks are at using it for prevention. Kimberly Sutherland, vice president of fraud and identity strategy at LexisNexis Risk Solutions, said there has been a noticeable rise in fraud attempts that appear to be AI-related in 2024.

Criminals used to have to guess or steal passwords through phishing attacks or data breaches, often targeting high-value accounts one by one. Now, scammers can quickly cross-reference and test reused passwords across platforms. They can use AI systems to write code that would automate various aspects of their ploys, O’Neill said.

If scammers obtain your email and a commonly used password from a tech company data breach, AI tools can swiftly check if the same credentials unlock your bank, social media or shopping accounts.

Financial institutions are taking new steps — and tapping AI themselves — to shield your money and data.

Banks monitor how you enter credentials, whether you tend to use your left or right hand when swiping on the app, and your device’s IP address to build a profile on you. If a login attempt doesn’t match your typical behaviour, it is flagged, and you may be prompted to provide more information before proceeding.

They can tell when you're being coerced into filling out information because of shifts in your typing cadence. If digits are copied and pasted, if the voice verification is too perfect, or if text is too evenly spaced and grammatically correct, that is a red flag, said Jim Taylor, chief product officer at RSA Security, a firm whose fraud-detection tech is used by Wells Fargo, Citibank and others.

Consumers paid scammers $US1.4bn in cryptocurrency in 2023, up more than 250 per cent from 2019, according to FTC data.

As a result, security officials suggest that you turn on two-factor authentication, so you get a text or email whenever someone tries logging into one of your accounts. If anything feels off during a potential money exchange, take a beat.

Pressing pause on a potentially fraudulent situation is also important psychologically. Many scammers try to create a false urgency or confuse victims to manipulate them. If all the information about a transaction or account is coming from one person, that is a red flag. Get a second opinion from a trusted contact.

“If it’s going to hurt if you lose it, validate it,” said O’Neill, the former Secret Service agent.

Original URL: https://www.theaustralian.com.au/business/the-wall-street-journal/ai-is-helping-scammers-outsmart-you-and-your-bank/news-story/1913646471f38698712f3cd7c7de612e