Celebrities, politicians, and banks face a deepfake dilemma

We’re reaching the “so easy, a caveman can do it” stage of the deepfake epidemic. Fraudsters don’t need a computer science degree to create and deploy armies of fake humans, nor will doing so drain their checking account (quite the opposite).

As if deepfake technology weren’t accessible enough, the recent unveiling of OpenAI’s Sora product both simplifies and complicates matters. Sora, which for now is available only to select users, produces photorealistic video scenes from text prompts. Not to be outdone, Alibaba demonstrated its EMO product making a Sora-generated character sing. The lifelike videos created by such deepfake platforms fool even the most sophisticated liveness detection solutions.

AI-powered fraud isn’t flying under the radar anymore; the prospect of taxpayers losing upwards of one trillion dollars will do that. One burgeoning scam, known as pig butchering, was even featured on an episode of John Oliver’s Last Week Tonight. These scams start with a wrong-number text message and, over the course of weeks or months, lure recipients into bogus crypto investments. Conversational generative AI tools like ChatGPT, combined with clever social engineering, make pig butchering a persuasive and scalable threat. Accompanying these texts with realistic deepfaked images only bolsters their perceived authenticity.

Companies are taking notice, too. So is the Biden administration, though its late-2023 executive order on “Safe, Secure, and Trustworthy Artificial Intelligence” didn’t sufficiently address synthetic fraud, specifically cases involving generative AI and deepfakes.

The damage caused by AI-generated, deepfaked identities continues to worsen. Here is how it has permeated seemingly every facet of our lives, and how banks can stay one step ahead.

Hacking the vote

The 2024 presidential election is shaping up to be quite the spectacle, one that will capture the eyes of the world and, in all likelihood, further divide an already polarized populace. Citizens exercising their right to vote is crucial, but the advancement of deepfake technology raises another concern: are voters properly informed?

Election-hacking-as-a-service sounds like something out of dystopian fiction, but it’s just the latest threat politicians and their constituents need to worry about. Highly sophisticated factions, in the US and abroad, are leveraging generative AI and deepfakes to weaponize disinformation and flip elections like flapjacks.

Some election meddlers claim to have swung the outcomes of more than 30 elections. Remember the deepfaked Biden robocall ahead of the New Hampshire primary? That’s the handiwork of an election-hacking superteam. A personalized text message or email might not be from [insert candidate here]. A video portraying an indecent remark could be fabricated. Some voters may say they’re “leaning” toward voting yea or nay on Measure Y or Prop Z when, in actuality, they’re being pushed in either direction by synthetic election swingers.

In February, a slew of tech behemoths signed an accord to fight back against AI-generated election hacking. Like Biden’s executive order, the accord is a step in the right direction; time will tell if it pays dividends.

The case of the deepfaked CFO

Deepfaked audio and video are convincing enough to sway voters. They can also dupe a multinational financial firm out of $25 million overnight.

Just ask the Hong Kong finance worker who unknowingly wired about $25.6 million to fraudsters after attending a video conference call with people he believed to be his colleagues. A deepfaked likeness of the company’s CFO authorized the transactions (15 deposits into five accounts in total), which the worker discovered were fraudulent only after checking in with his corporate office.

It appears the bad actors used footage from past video conferences to create the deepfaked identities. Data pulled from WhatsApp and emails helped make the identities look more legitimate, which shows the lengths to which these deepfaking fraudsters will go.

A couple of years ago, fraudsters would have perpetrated this attack in a simpler fashion, via phishing, for example. But with the promise of bigger paydays, and far less effort and technical know-how required thanks to the ongoing AI explosion, cyber thieves have every incentive to deepfake companies all the way to the bank.

The Taylor Swift incident

Celebrities, too, are getting a taste of just how destructive deepfakes can be.

Perhaps the most notable (and widely covered) celebrity deepfake incident happened in January, when sexually explicit, AI-generated pictures of Taylor Swift popped up on social media. Admins on X/Twitter, where the deepfaked images spread like wildfire, eventually blocked searches for the images, but not before they had garnered nearly 50 million views.

Pornographic celebrity deepfakes aren’t a new phenomenon. As early as 2017, Reddit users were superimposing the faces of popular actresses, such as Scarlett Johansson and Gal Gadot, onto porn performers. But AI technology back then was nowhere near where it is today: discerning users could spot a poorly rendered face-swap and determine a video or image was fake.

Shortly after the Taylor Swift fiasco, US senators proposed a bill that would enable victims of AI-generated deepfakes to sue their creators. Such legislation is long overdue, considering a 2019 report found that non-consensual porn made up 96 percent of all deepfake videos.

Deepfaking the finservs

Whether they’re used to hack elections, spread pornographic celebrity fakes, or impersonate a company’s CFO, deepfakes have never been more convincing or dangerous. And, because fraudsters want the most bang for their buck, naturally they’re inclined to attack those with the most bucks: banks, fintech companies, and other financial institutions.

The $25 million CFO deepfake speaks to just how severe these cases can be for finservs, though most deepfaking fraudsters prefer a measured approach that spans weeks or months. Such is the M.O. of SuperSynthetic™ “sleeper” identities. This newest species of synthetic fraudster is too crafty to settle for a brute-force offensive. Instead, it leverages an aged, geo-located identity that’s intelligent enough to make occasional deposits and interact with a banking website or app over an extended period, passing as a genuine customer.

However, before SuperSynthetics can achieve their long-awaited goal (accept a credit card or loan offer, cash out, and scram), they must clear one vital step: the onboarding process.

This is where deepfakes come in. During onboarding, SuperSynthetics can present deepfaked driver’s licenses and other forms of ID, and even pass live video interviews if need be. Given the advancement of deepfake technology, and the unreliability of liveness detection, the only real chance banks have is to stop SuperSynthetic identities before they’re onboarded.

With a massive, scalable source of real-time, multicontextual, activity-backed identity intelligence, banks can preemptively sniff out SuperSynthetics. This is the foundation of a “top-down” approach that analyzes synthetic identities collectively, as opposed to the one-by-one approach of the olden days. A bird’s-eye view of identities uncovers signature online behaviors and patterns consistent enough to rule out false positives. Multiple identities depositing money into their checking accounts every Wednesday at 9:27 p.m.? Something’s afoot.
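Deduce doesn’t publish the internals of its detection engine, so what follows is only a minimal sketch of the top-down idea: bucket account activity by a shared timing signature and flag signatures that recur across suspiciously many distinct identities. The event format, function name, and threshold below are all invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

def flag_shared_timing_signatures(events, min_identities=5):
    """Illustrative top-down check (hypothetical, not Deduce's actual logic).

    events: iterable of (identity_id, datetime) activity records,
    e.g. checking-account deposits. Returns timing signatures shared
    by at least `min_identities` distinct identities.
    """
    identities_by_signature = defaultdict(set)
    for identity_id, ts in events:
        # Signature = weekday plus minute of day, e.g. ("Wed", "21:27").
        signature = (ts.strftime("%a"), ts.strftime("%H:%M"))
        identities_by_signature[signature].add(identity_id)
    # One account on a rigid schedule is unremarkable; many distinct
    # "customers" on the *same* rigid schedule is the collective tell.
    return {
        sig: ids
        for sig, ids in identities_by_signature.items()
        if len(ids) >= min_identities
    }

# Hypothetical data: six "customers" all depositing Wednesdays at 9:27 p.m.,
# plus one ordinary account transacting at an unremarkable time.
events = [(f"acct-{i}", datetime(2024, 3, 6, 21, 27)) for i in range(6)]
events.append(("acct-real", datetime(2024, 3, 7, 10, 2)))
print(flag_shared_timing_signatures(events))
# Flags only the ("Wed", "21:27") cohort of six accounts.
```

A production system would of course match on fuzzier, richer features (device fingerprints, IP ranges, deposit amounts, identity age) rather than an exact weekday-and-minute hit, but the principle is the same: the signal lives across identities, not within any single account viewed one by one.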

The top-down approach is the surest and fastest way banks can ferret out synthetic identities and avoid getting deepfaked at the onboarding stage. But the clock is ticking: a study commissioned by Deduce found that more than 75% of finservs already had synthetic identities in their databases, and 87% had extended credit to fake accounts.

Bank vs. deepfake clearly isn’t a fair fight. But if banks do their work early and stop synthetic identities at the door, their customers, reputations, and bottom lines will all be the better for it.