Deepfake AI Threat Can Go Far Beyond Financial Losses


Most financial institutions haven’t invested in identity verification programs that root out deepfake AI fraud. Though fraudsters could use the tech to steal or extort substantial sums, they could also use deepfakes to tarnish an institution’s hard-won reputation.

Kevin Libby, Fraud and Security Analyst at Javelin Strategy & Research, studied deepfake AI fraud trends in his report, Unmasking the Threat of AI: Deepfakes and Financial Security. He examined how fraudsters exploit AI and recommended ways businesses can protect themselves from the emerging threat.

A Digital Mask

Artificial intelligence has improved so rapidly that distinguishing a computer-generated voice from the real thing is no longer easy. The technology has accelerated the rise of deepfakes: AI-generated forgeries of some aspect of a person's identity, such as their voice or face.

In voice cloning, AI programs analyze recorded conversations and generate novel scripts that replicate vocal intonations and inflections, and sometimes even word choice. Fraudsters have used deepfake audio in phishing schemes, impersonating company executives with cloned voices.

Another type of deepfake relies on facial mapping, or face cloning. Criminals use AI to extract samples from images and videos of the target, often scraped from social media accounts such as Facebook or Instagram. AI programs then synthesize that data into a digital mask that can be mapped onto someone else's face.

“The technology is still developing, so it’s not a wide-scale problem yet,” Libby said. “The programs that can produce convincing deepfakes aren’t highly accessible and they require substantial computing power. However, as AI gets more efficient, the demands on computational systems are going to decrease and deepfakes will be cheaper, faster, and widely available.”

A Flood of Fraud

A recent survey found that 68% of financial institutions are vulnerable to deepfake fraud. More unsettling, 53% of banks and credit unions not only lack a solution but have no plans to implement one. As deepfakes proliferate, unprotected institutions could find themselves in a difficult position.

“If they don’t have systems in place before we cross that threshold, there’s going to be a flood of fraud,” Libby said. “It’s going to be the kind of fraud that drains bank accounts and causes serious reputational problems for banks and credit unions. Financial institutions can’t wait until we’ve reached the threshold to invest in technologies to protect themselves.”

Even though deepfake quality is still maturing, criminals aren't waiting for the technology to be perfected. They are already using it to run scams, and not just against individuals. Fraudsters have also targeted businesses, in some cases stealing $25 million to $35 million in a single incident.

Another disquieting aspect of deepfake fraud is the number of ways fraudsters can deploy it. Criminals have used the technology for phishing, extortion, and manipulation schemes over phone, video, and email. Once an institution transfers funds to a fraudulent account, the money is immediately moved out and is nearly impossible to trace.

Reputation Control

Though the financial costs of deepfake fraud are rightfully concerning, the more pressing threat for banks and credit unions may be reputational. An estimated 67% of financial institutions that purchase identity verification tools to combat fraud cite brand protection as their chief concern.

Fraudsters could use facial mapping to impersonate an executive and create videos that are deceptive, inappropriate, or offensive. Criminals could use deepfakes to give misleading investment advice or report fraudulent financial information about the company to affect stock prices.

Though fraudsters could enrich themselves, the greater risk for a financial institution is acute damage to its reputation. After such an incident, customers may find it hard to trust the institution, or the impersonated individual, for some time.

“To control their reputations, risk departments should do their own research and consume threat intelligence from a number of sources,” Libby said. “They should constantly monitor posts pertaining to their organization, including videos about their CEOs and their employees. As soon as something drops, they can vet it and respond. The longer it stays out there, the more damage it can do.”
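As an illustration of that monitoring loop, here is a minimal sketch that flags recent posts mentioning watched names so a risk team can vet them quickly. The watchlist, post fields, and URLs are all hypothetical stand-ins; a real pipeline would pull from threat-intelligence feeds and social media APIs rather than a hard-coded list.

```python
from datetime import datetime, timezone

# Hypothetical brand and executive names to watch; illustrative only.
WATCHLIST = {"examplebank", "jane doe"}


def flag_for_review(posts: list[dict]) -> list[dict]:
    """Return posts mentioning watched names, newest first,
    so the risk team can vet them before they spread."""
    hits = [
        p for p in posts
        if any(term in p["text"].lower() for term in WATCHLIST)
    ]
    return sorted(hits, key=lambda p: p["posted_at"], reverse=True)


# Stand-in feed records; a real system would collect these continuously.
feed = [
    {"text": "Video of ExampleBank CEO Jane Doe giving stock tips",
     "posted_at": datetime(2024, 5, 1, tzinfo=timezone.utc),
     "url": "https://social.example/post/1"},
    {"text": "Unrelated market chatter",
     "posted_at": datetime(2024, 5, 2, tzinfo=timezone.utc),
     "url": "https://social.example/post/2"},
]

for post in flag_for_review(feed):
    print(post["url"], "-", post["text"])
```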

Investing in Protection

In a digital banking environment, identity verification and fraud detection must occur entirely through electronic channels. Even though budgets are often tight, financial institutions must invest in technology that identifies and guards against deepfake fraud.

Internal protocols should incorporate a multi-layered approval process for significant transactions. For example, transferring funds or sharing sensitive data should require more than a phone call from an executive; more secure protocols might add approval codes or device proofing, as sketched below.
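To make that concrete, here is a minimal sketch of what a layered approval check might look like. Every name in it, from the dollar threshold to the code and device registries, is a hypothetical illustration rather than a real vendor API; a production system would delegate these checks to dedicated MFA and device-fingerprinting services.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for an MFA service and a device registry.
EXPECTED_CODES = {"exec-42": "913574"}
REGISTERED_DEVICES = {"exec-42": {"laptop-a1b2"}}

APPROVAL_THRESHOLD = 50_000  # illustrative dollar amount triggering extra checks


@dataclass
class TransferRequest:
    requester_id: str
    amount: float
    approval_code: str  # one-time code delivered out of band
    device_id: str      # fingerprint of the submitting device


def approve_transfer(req: TransferRequest) -> bool:
    """Require layered checks on significant transfers.

    A voice or video instruction alone is never sufficient: the
    request must also carry a valid out-of-band approval code and
    originate from a device registered to the requester.
    """
    if req.amount < APPROVAL_THRESHOLD:
        return True  # routine transfers follow the normal workflow
    code_ok = EXPECTED_CODES.get(req.requester_id) == req.approval_code
    device_ok = req.device_id in REGISTERED_DEVICES.get(req.requester_id, set())
    return code_ok and device_ok


# A request that merely "sounds like" an executive still fails
# without the code and a registered device.
print(approve_transfer(TransferRequest("exec-42", 2_000_000, "000000", "unknown")))      # False
print(approve_transfer(TransferRequest("exec-42", 2_000_000, "913574", "laptop-a1b2")))  # True
```

The point of the layering is that a cloned voice defeats only one factor; the approval code and device proof travel over channels a deepfake cannot imitate.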

Education is just as important. Employees who know fraudsters' tactics can stay vigilant for signs of fraud in email and phone conversations. Cybersecurity departments should conduct annual interactive risk training that specifically covers deepfake scams, so employees understand how difficult they are to identify.

“It might require a sizeable investment in technology and training,” Libby said. “However, the risk of financial losses and reputational damage from deepfake scams means the benefits far outweigh the investment.”
