The war between democracy and generative AI rages on

2024 is a big year for democracy. Half of the global population resides in countries that have an election this year, and all eyes, of course, will be on the main event: the grudge match between Joe Biden and Donald Trump.

But running parallel to this frenzied election year is the rapid evolution of artificial intelligence and its ever-growing assortment of applications. Just when we think it can’t get any smarter or more believable, AI leapfrogs our expectations. It’s exciting for users looking to post amusing memes or videos in their Slack channels—not so much for politicians. Voters are frustrated, too. Is that audio clip of a candidate’s off-color remarks genuine, or deepfaked malarkey?

The deceptive tactics of generative AI-powered deepfakes and fake humans—and their potential to swing elections—aren’t just fodder for cybersecurity trades. Biden issued an executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” late last year. Several tech giants, including Google, Microsoft, and OpenAI, recently met at the Munich Security Conference and agreed to take “reasonable precautions” to stop AI from influencing elections.

Executives from leading tech companies gather at the 2024 Munich Security Conference.

But Biden’s executive order doesn’t sufficiently address synthetic fraud, the Munich pact, according to critics, isn’t proactive enough, and fraudsters (especially those with AI at their disposal) are always a step ahead regardless of the countermeasures platforms or the government put in place. Furthermore, the tech companies tasked with hosting and moderating deepfaked content have laid off more than 40K workers. Without a new approach to neutralizing synthetic fraudsters, the fakery will continue to snowball.

Here are the ways in which generative AI is defrauding elections globally, and how a re-tooled approach may help social media and AI platforms fight back.

Audio deepfakes speak volumes

Video deepfakes steal most of the headlines, but AI-generated audio is more advanced and democratized (at least until hyper-realistic video offerings like OpenAI’s Sora become widely available). One could even argue that deepfaked audio is more effective in altering elections, especially after a Biden robocall tried to dissuade people from voting in the New Hampshire primary.

Context, or lack thereof, is what makes audio deepfakes tough to recognize. The voters on the other end of the line lack the visual indicators that give video deepfakes away.

This context deficit bolsters the believability of so-called “grandparent” scams as well, in which a fraudster clones the voice of someone who’s close to the victim and convinces them to wire money. Personalization brings credibility. Just as Cameo users can have celebrities record birthday wishes for a loved one, now AI applied to voice or video patterns can have a personality or politician record a custom message.

If you’re in the business of artificially swaying voter sentiment and rigging elections, simply copy the voice of a relative or friend, spew some disinformation about Candidate XYZ or Prop ZYX, and move on to the next robocall.

In February, the FCC banned robocalls that use AI-generated voices. Time will tell if this puts audio deepfakers on hold. (Don’t count on it.)

A picture’s worth a thousand votes

AI image generators are also under the microscope. The Center for Countering Digital Hate, a watchdog group, found that tools like Midjourney and ChatGPT Plus can create deceptive images capable of spreading false political information.

The study, which additionally tested DreamStudio and Microsoft’s Image Creator, found that the tools produced fake election imagery in more than 40% of test cases. Midjourney performed significantly worse, generating disinformation 65% of the time—not a huge surprise considering the company didn’t sign the Munich Security Conference pact and employs only 11 team members.

The realistic nature of these images is startling. In March, an AI-generated photo purporting to show Black Trump supporters posing with the former president was exposed as a fake, apparently created in an attempt to draw Black voters away from the Biden campaign. Several equally bogus AI-generated images of Trump being arrested also proliferated across social media.

AI-generated political images are incredibly lifelike.

Since the watchdog report, leading AI generators have put guardrails in place. The most obvious move is to disallow prompts involving “Biden” or “Trump.” However, jailbreaking maneuvers can sometimes bypass such controls. For example, instead of typing a candidate’s name, bad actors can key in their defining physical characteristics along with, say, “45th president,” and produce the desired image.

Take political candidates out of the equation. There are still other visuals that can sway voters. How about a fake image of a Trump supporter smashing a ballot box open, or Biden supporters lighting Mar-a-Lago ablaze? Election tampering campaigns don’t always target a specific candidate or political party but rather a divisive issue such as freedom of choice or border control. For instance, images of migrants illegally crossing the Rio Grande or climbing a fence, fake or not, are bound to rile up one group of voters.

A global crisis

International examples of AI-based election interference could portend trouble for the US, but hopefully will inspire technologists and government officials to rethink their cybersecurity approach.

In Slovakia, a key election was tainted by AI-generated audio that mimicked a candidate’s voice saying he had tampered with the election and, worse (for some voters), planned to raise beer prices. Indonesian Gen-Z voters warmed up to a presidential candidate and previously disgraced military general thanks to a cat-loving, “chubby-cheeked” AI-generated image of him. Bad actors in India, meanwhile, are using AI to “resurrect” dead political figures who in turn express their support for those currently in office.

An AI-generated avatar of M Karunanidhi, the deceased leader of India’s DMK party.

The image of the Indonesian presidential candidate is nothing more than a harmless campaign tactic, but are the other two examples the work of election-hacking-as-a-service schemes? As troubling as the term might be, this is our new democratic reality: hackers contracted to unleash hordes of synthetic identities across social media, spreading false, AI-generated content to influence voter sentiment however they please.

An Israeli election hacking group dubbed “Team Jorge,” which controls over 30K fake social media profiles, meddled in a whopping 33 elections, according to a Guardian report. If similar groups aren’t already threatening elections in the US, they will soon.

The road ahead

Combating AI-powered election fraud is an uphill battle, and Midjourney CEO David Holz believes the worst is yet to come. “Anybody who’s scared about fake images in 2024 is going to have a hard 2028,” Holz warned during a recent video presentation. “It will be a very different world at that point… Obviously you’re still going to have humans running for president in 2028, but they won’t be purely human anymore.”

What is the answer to this problem, this future Holz sees in which every political candidate has a lifelike “deepfake chatbot” armed with manufactured talking points? Raising public awareness of generative AI’s role in election tampering is important but, ironically, it can also backfire. As more people learn about the complexity and prevalence of deepfaked audio, video, and images, a growing sense of skepticism can hinder their judgment. Known in political circles as the “liar’s dividend,” this effect causes jaded, deepfake-conscious voters to mislabel genuine media as fake. It doesn’t help matters when presidential candidates label mainstream media similarly while publicizing their own view of the world.

Social media and generative AI platforms have their work cut out for them. Neutralizing, much less curbing, AI-powered election fraud pits them against artificial intelligence and synthetic identities that are disturbingly lifelike and nearly undetectable. This includes SuperSynthetic™ “sleeper” identities that can hack elections just as easily as they swindle finservs.

Deepfaked synthetic identities are too smart and real-looking to face head-on. Stopping these slithery fraudsters requires an equally crafty strategy, and a sizable chunk of real-time, multicontextual, activity-backed identity intelligence. Our money is on a “top-down” approach that, prior to account creation, analyzes synthetic identities collectively rather than individually. This bird’s eye view picks up on signature online behaviors of synthetic identities, patterns that rule out coincidence.

The Deduce Identity Graph is monitoring upwards of 30 million synthetic identities in the US alone. Some of these identities will attempt to “hack the vote” come November. Some already are. A high-level approach that examines them as a group—before they can deepfake unsuspecting voters—may be democracy’s best shot.

Celebrities, politicians, and banks face a deepfake dilemma

We’re reaching the “so easy, a caveman can do it” stage of the deepfake epidemic. Fraudsters don’t need a computer science degree to create and deploy armies of fake humans, nor will it drain their checking account (quite the opposite).

As if deepfake technology wasn’t accessible enough, the recent unveiling of OpenAI’s Sora product only simplifies—and complicates—matters. Sora, which for now is only available to certain users, produces photorealistic video scenes from text prompts. Not to be outdone, Alibaba demonstrated their EMO product making the Sora character sing. The lifelike videos created by such deepfake platforms fool even the ritziest of liveness detection solutions.

AI-powered fraud isn’t flying under the radar anymore—the prospect of taxpayers losing upwards of one trillion dollars will do that. One burgeoning scam, known as pig butchering, was featured on an episode of Last Week Tonight with John Oliver. These scams start as a wrong number text message and, over the course of weeks or months, lure recipients into bogus crypto investments. Conversational generative AI tools like ChatGPT, combined with clever social engineering, make pig butchering a persuasive and scalable threat. Accompanying these texts with realistic deepfaked images only bolsters the perceived authenticity.

Companies are taking notice, too. So is the Biden administration, though its executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” in late 2023 didn’t sufficiently address synthetic fraud—specifically cases involving Generative AI and deepfakes.

The damage caused by AI-generated, deepfaked identities continues to worsen. Here is how it has permeated seemingly every facet of our lives, and how banks can stay one step ahead.

Hacking the vote

The 2024 presidential election is shaping up to be quite the spectacle, one that will capture the eyes of the world and, in all likelihood, further sever an already divided populace. Citizens exercising their right to vote is crucial, but the advancement of deepfake technology raises another concern: are voters properly informed?

Election-hacking-as-a-service sounds like the work of dystopian fiction, but it’s just the latest threat politicians and their constituents need to worry about. Highly sophisticated factions—in the US and abroad—are leveraging generative AI and deepfakes to weaponize disinformation and flip elections like flapjacks.

Some election meddlers have changed the outcome of 30+ elections. Remember the deepfaked Biden robocall ahead of the New Hampshire primary? That’s the handiwork of an election hacking superteam. A personalized text message or email might not be from [insert candidate here]. A video portraying an indecent remark could be fabricated. Some voters may say they’re “leaning” towards voting yay or nay on Measure Y or Prop Z, when in actuality they’re being pushed in either direction by synthetic election swingers.

In February, a slew of tech behemoths signed an accord to fight back against AI-generated election hacking. Like Biden’s executive order, the accord is a step in the right direction; time will tell if it pays dividends.

The case of the deepfaked CFO

Deepfaked audio and video are convincing enough to sway voters. They can also dupe multinational financial firms out of $25 million—overnight.

Just ask the Hong Kong finance worker who unknowingly wired about $25.6 million to fraudsters after attending a video conference call with what he thought were his colleagues. A synthetic identity posing as the company’s CFO authorized the transactions—15 total deposits into five accounts—which the worker discovered were fraudulent after checking in with his corporate office.

It appears the bad actors used footage of past video conferences to create the deepfaked identities. Data from WhatsApp and emails helped make the identities look more legitimate, which shows the lengths these deepfaking fraudsters are willing to go to.

A couple of years ago, fraudsters would have perpetrated this attack in a simpler fashion, via phishing, for example. But with the promise of bigger paydays, and much less effort and technical knowhow required thanks to the ongoing AI explosion, cyber thieves have every incentive to deepfake companies all the way to the bank.

The Taylor Swift incident

Celebrities, too, are getting a taste of just how destructive deepfakes can be.

Perhaps the most notable (and widely covered) celebrity deepfake incident happened in January when sexually explicit, AI-generated pictures of Taylor Swift popped up on social media. Admins on X/Twitter, where the deepfaked images spread like wildfire, eventually blocked searches for the images but not before they garnered nearly 50 million views.

Pornographic celebrity deepfakes aren’t a new phenomenon. As early as 2017, Reddit users were superimposing the faces of popular actresses—such as Scarlett Johansson and Gal Gadot—onto porn performers. But AI technology back then was nowhere near where it is today. Discerning users could spot a poorly rendered face-swap and determine a video or image was fake.

Shortly after the Taylor Swift fiasco, US senators proposed a bill that would enable victims of AI-generated deepfakes to sue the videos’ creators—long overdue considering a 2019 report found that non-consensual porn comprised 96 percent of all deepfake videos.

Deepfaking the finservs

Whether it’s hacking elections, spreading pornographic celebrity deepfakes, or posing as a company’s CFO, deepfakes have never been more convincing or dangerous. And, because fraudsters want the most bang for their buck, naturally they’re inclined to attack those with the most bucks: banks, fintech companies, and other financial institutions.

The $25 million CFO deepfake speaks to just how severe these cases can be for finservs, though most deepfaking fraudsters prefer a measured approach that spans weeks or months. Such is the M.O. of SuperSynthetic™ “sleeper” identities. This newest species of synthetic fraudster is too crafty to settle for a brute-force offensive. Instead, it leverages an aged and geo-located identity that’s intelligent enough to make occasional deposits and interact with a banking website or app for an extended period to appear like a genuine customer.

However, SuperSynthetics achieving their long-awaited goal—accepting a credit card or loan offer, cashing out, and scramming—is contingent on one vital step: passing the onboarding process.

This is where deepfakes come in. During onboarding, SuperSynthetics can deepfake driver’s licenses and other forms of ID, even live video interviews if need be. Given the advancement in deepfake technology, and the unreliability of liveness detection, the only real chance banks have is to stop SuperSynthetic identities before they’re onboarded.

Using a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence, preemptively sniffing out SuperSynthetics is indeed possible. This is the foundation of a “top-down” approach that analyzes synthetic identities collectively—different from the one-by-one approach of the olden days. A bird’s eye view of identities uncovers signature online behaviors and patterns consistent enough to rule out a false positive. Multiple identities depositing money into their checking account every Wednesday at 9:27 p.m.? Something’s afoot.
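To make the idea concrete, here’s a minimal, hypothetical sketch of that bird’s eye grouping (not Deduce’s actual implementation): bucket every identity’s recurring deposits by weekday and minute-of-day, then flag any bucket where an improbable number of supposedly unrelated identities act in lockstep. The event format, function name, and cluster threshold are all illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime

def flag_synchronized_identities(events, min_cluster=5):
    """events: iterable of (identity_id, datetime) deposit timestamps.

    Returns {(weekday, minute_of_day): identity_ids} for every time slot
    shared by at least `min_cluster` distinct identities.
    """
    buckets = defaultdict(set)
    for identity_id, ts in events:
        # e.g. key (2, 1287) -> Wednesday at 21:27
        key = (ts.weekday(), ts.hour * 60 + ts.minute)
        buckets[key].add(identity_id)
    # Clusters of distinct identities sharing an exact recurring slot
    return {key: ids for key, ids in buckets.items() if len(ids) >= min_cluster}

# Five "different" accounts all depositing on Wednesdays at 9:27 p.m.
events = [
    (f"acct-{i}", datetime(2024, 3, 6 + 7 * week, 21, 27))
    for i in range(5)
    for week in range(4)
]
suspicious = flag_synchronized_identities(events)  # one flagged time slot
```

A production system would tolerate timing jitter (bucketing to the nearest few minutes) and weigh many more signals, but the principle is the same: coincidence at the individual level becomes a signature at the group level.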

The top-down approach is the surest and fastest way banks can ferret out synthetic identities and avoid getting deepfaked at the onboarding stage. But the clock is ticking. A study, commissioned by Deduce, found more than 75% of finservs already had synthetic identities in their databases, and 87% had extended credit to fake accounts.

Bank vs. Deepfake clearly isn’t a fair fight. But if banks do their work early, and subsequently avoid deepfakes altogether, their customers, reputations, and bottom lines will be the better for it.

Get ahead, or get left behind

New technology gets the people going. Just ask the folks coughing up a fair sum of cash for an Apple Vision Pro. Sure, these users may look like Splinter Cell operatives with their VR goggles on but, most likely, Apple’s foray into “spatial computing” will take off sooner rather than later.

However, before everyday users and even large enterprises can adopt new technologies, another category of users is way ahead of them: fraudsters. These proactive miscreants adopt the latest tech and find new ways to victimize companies and their customers. Think metaverse and crypto fraud or, most recently, the use of generative AI to create legions of humanlike bots.

Look back through the decades and a clear pattern emerges: new tech = new threat. Phishing, for example, was the offspring of instant messaging and email in the mid-1990s. Even the “advance fee” or “Nigerian Prince” scam we associate with our spam folders originally cropped up in the 1920s due to breakthroughs in physical mail.

What can we learn from studying this troubling pattern? How can businesses adopt the latest wave of nascent technologies while protecting themselves from opportunistic fraudsters? In answering these questions, it’s helpful to examine the major technological advancements of the past 20+ years—and how bad actors capitalized at every step along the way.

The 2000s

The 2000s ushered in digital identities and, by extension, digital identity fraud.

Web 1.0 and the internet had exploded by the early aughts. PCs, e-commerce, and online banking increased the personal data available on the web. As more banks moved online, and digital-only banks emerged, fintech companies like PayPal hit the ground running and online transactions skyrocketed. Fraudsters pounced on the opportunity. Phishing, Trojan horse viruses, credential stuffing, and exploiting weak passwords were among the many tricks that fooled users and led to breaches at notable companies and financial institutions.

An example of a Nigerian Prince or “419” email scam

Phishing scams, in which bogus yet legitimate-looking emails persuade users to click a link and input personal info, took off in the 2000s and are even more effective today. Thanks to AI, including AI-based tools like ChatGPT, phishing emails are remarkably sophisticated, targeted, and scalable.

Social media entered the frame in the 2000s, too, which opened a Pandora’s box of online fraud schemes that still persist today. The use of fake profiles provided another avenue for phishing and social engineering that would only widen with the advent of smartphones.

The 2010s

The 2010s were all about the cloud. Companies went gaga over low-cost computing and storage solutions, only to go bonkers (or broke) due to the corresponding rise in bot threats.

By the start of the decade, Google, Microsoft, and AWS were all-in on the cloud. The latter brought serverless computing to the forefront at the 2014 re:Invent conference, and the two other big-tech powerhouses followed suit. Then came the container renaissance: the release of Docker and Kubernetes, the mass adoption of DevOps, hybrid and multicloud, and so on. But, in addition to their improved portability and faster deployment, containers afforded bad actors (and their bots) another attack surface.

AWS unveils Lambda (and serverless computing) at re:Invent 2014

The rise of containers, cloud-native services, and other cloudy tech in the 2010s led to a boom in innovation, efficiency, and affordability for enterprises—and for fraudsters. Notably, the Mirai botnet tormented global cloud services companies using unprecedented DDoS (distributed denial of service) attacks, and the 3ve botnet accrued $30 million in click-fraud over a five-year span.

Malicious bots had never been cheaper or more scalable, nor brute force and credential stuffing attacks more seamless and profitable. The next tech breakthrough would catapult bots to another level of deception.

The 2020s

AI has blossomed in the 2020s, especially over the past year, and once again fraudsters have flipped the latest technological craze into a cash cow.

Amid the ongoing AI explosion, bad actors have specifically leveraged generative AI and self-learning identity personalization to line their pockets. It’s hard to say what’s scarier—how human these bots appear, or how easy it is for novice users to create them. The widespread availability of data and AI’s capacity to teach itself using LLMs (large language models) have spawned humanlike identities at massive scale. Less technical fraudsters can easily build and deploy these identities thanks to tools like WormGPT, otherwise known as “ChatGPT’s malicious cousin.”

SuperSynthetic identities represent the next step in bot evolution

The most nefarious offshoot of AI’s golden age may be SuperSynthetic™ identities. The most humanlike of the synthetic fraud family tree, SuperSynthetics are all about the long con and don’t mind waiting several months to cash out. These identities, which can deepfake their way past account verification if need be, are realistically aged and geo-located with a legit credit history to boot, and they’ll patiently perform the online banking actions of a typical human to build trust and creditworthiness. Once that loan is offered, the SuperSynthetic lands its long-awaited reward. Then it’s on to the next bank.

Like Web 1.0 and cloud computing before it, AI’s superpowers have amplified the capabilities of both companies and the fraudsters who threaten their users, bottom lines and, in some cases, their very existence. This time around, however, the threat is smarter, more lifelike, and much harder to stop.

What now?

There’s undoubtedly a positive correlation between the emergence of technological trends and the growth of digital identity fraud. If a new technology hits the scene, fraudsters will exploit it before companies know what hit them.

Rather than getting ahead of the latest threats, many businesses are employing outdated mitigation strategies that woefully overlook the SuperSynthetic and stolen identities harming their pocketbooks, users, and reputations. Traditional fraud prevention tools scrutinize identities individually, prioritizing static data such as device, email, IP address, SSN, and other PII. The real solution is to analyze identities collectively, and to track dynamic activity data over time. This top-down strategy, with a sizable source of real-time, multicontextual identity intelligence behind it, is the best defense against digital identity fraud’s most recent evolutionary phase.

It’s not that preexisting tools in security stacks aren’t needed; it’s that these tools need help. At last count, the Deduce Identity Graph is tracking nearly 28 million synthetic identities in the US alone, including nearly 830K SuperSynthetic identities (a 10% increase from Q3 2023). If incumbent antifraud systems aren’t fortified, and companies continue to look at identities on a one-to-one basis, AI-generated bots will keep slipping through the cracks.

New threats require new thinking. Twenty years ago phishing scams topped the fraudulent food chain. In 2024 AI-generated bots rule the roost. The ultimatum for businesses remains the same: get ahead, or get left behind.

Synthetic customers are there, even if you don’t see them

There’s no denying that customer data platforms (CDPs) are a must-have tool for today’s companies. Consolidating customer data into one location is much more manageable. Aside from data privacy considerations—particularly in finance and healthcare—a CDP’s organized, streamlined view of customer data activates personalized user experiences and offers for existing customers while accurately identifying prospective customers who are most likely to drive revenue.

But synthetic fraud, which now accounts for 85% of all identity fraud, is infesting the tidiest and most closely monitored of CDPs. Most CDPs scan for telltale signs of fraud in real-time; however, synthetic fraudsters are too smart for that. The ubiquity of AI, and its ever-growing intelligence, enables bad actors to create and manipulate synthetic identities that appear more human than ever. The signs of fraud aren’t so obvious anymore, and the cybersecurity tools used by many companies aren’t up to snuff.

Effectively stomping out synthetic identity fraud requires an obsessive degree of CDP hygiene. This, of course, isn’t possible without a thorough understanding of what synthetic identities are capable of, how they operate, and the strategy companies must adopt to neutralize them.

Silent killers

No intelligence agency wants to readily admit it’s been infiltrated by a spy, and no CEO is exactly champing at the bit to admit their company’s customer database is crawling with fake customers. When PayPal’s then-CEO, Dan Schulman, admitted to over 4 million fake customers, the fintech company lost over 25% of its market capitalization. But these fraudsters are indeed there, camped out in CDPs and operating like legitimate customers—deposits, withdrawals, credit services, the whole nine.

A recent Wakefield report surveyed 500 senior fraud and risk professionals from the US. More than 75% of these executives said they had synthetic customers. Half of respondents deemed their company’s synthetic fraud prevention efforts somewhat effective, at best.

Perhaps most troubling? 87% of these companies admitted to extending credit to synthetic customers, and 53% of the time credit was extended proactively, via a marketing campaign, to the fraudster. These fraudsters aren’t just incredibly human-like and patient—they’re in it for the big haul. And according to the FTC’s 2022 report on identity fraud, the per-incident financial impact is in excess of $15K.

Synthetic Sleeper identities, as we call them, can remain in CDPs for months, in some cases over a year. They deposit small amounts of money here and there while interacting with the website or mobile app like a real customer would. Once their creditworthiness gets a bump, and they qualify for a loan or line of credit, payday is imminent. The fraudster performs a “bust-out,” or “hit-and-run.” The money is spent, and the bank is left with uncollectible debt.

This is not your grandmother’s synthetic identity. Such intelligence and cunning is the handiwork of synthetic fraud’s latest iteration: the SuperSynthetic™ identity.

SuperSynthetic, super slippery

How are synthetic fraudsters turning CDPs into their own personal clubhouses? Look no further than SuperSynthetic identities. The malevolent offspring of the ongoing generative AI explosion, SuperSynthetics are growing exponentially. In Deduce’s most recent Index, 828,095 SuperSynthetic identities are being tracked in the identity graph. These are hitting companies, especially banks, with costly smash-and-grabs at an unprecedented rate.

SuperSynthetics aren’t high on style points, but why opt for a brute force approach if you don’t need to? These methodical fraudsters are more than content playing the long game. Covering all of their bases allows for such patience—their credit history is legit; their identity is realistically aged and geo-located; and, for good measure, they can deepfake their way past selfie, video, or document verification.

Even the sharpest of real-time fraud detection solutions are unlikely to catch a SuperSynthetic. The usual hallmarks—an IP address or credit card being used for multiple accounts, behavioral changes over time—aren’t present. A SuperSynthetic is far too pedestrian to raise eyebrows, depositing meager dollar amounts over several months, regularly checking its account balance, paying bills and otherwise transacting innocuously until, finally, its reputation earns a credit card or loan offer.

Once the loan is transferred, or the credit card is acquired, it’s sayonara. The identity cashes out and moves on to the next bank. After all, the fraudster does not care about their credit score for that identity, one of dozens or hundreds they are manipulating. It has done its job and will be sacrificed for a highly profitable return.

Fake identities, real problems

Deduce estimates that 3-5% of financial services and fintech new accounts onboarded within the past two years are SuperSynthetic identities. Failing to detect these sleeper identities in a CDP hurts companies in a multitude of ways, all of which tie back to the bottom line.

Per the Wakefield report, 20% of senior US fraud and risk execs say synthetic fraud racks up between $50K and $100K per incident; 23% put the number at $100K+. That even the low end of this range sits at a whopping $50K should be alarming enough to prompt preemptive countermeasures against CDP breaches.

Another downside of synthetic infiltration is algorithm poisoning. Since the data for synthetic “customers” is inherently fake, this skews the models that drive credit decisioning. Risky applicants can be mistakenly offered loans, or vice versa. For banks, financial losses from algorithm poisoning are two-fold: erroneously extending credit to fake or unworthy customers; and bungling opportunities to extend credit to the right customers.

A signature approach

The good news for financial services organizations (and their CDPs) is the battle against synthetic, and even SuperSynthetic, identities is not a futile one. The same strategy that’s effective in singling out synthetic identities pre-NAO (New Account Opening) can help spot synthetics that have already breached CDPs.

Even if a SuperSynthetic has already bypassed fraud detection at the account opening stage, gathering identity activity from before, during, and after the NAO workflow and analyzing identities collectively, rather than one-by-one, unearths SuperSynthetic behavioral patterns.

Traditional fraud prevention tools take an individualistic approach, doubling down on static data such as device, email, and IP address for singular identities. But catching synthetic fraudsters, pre- or post-NAO, calls for tracking dynamic activity data over time. At a high level (literally), this translates to a top-down, or “bird’s-eye,” strategy—powered by an enormous and scalable source of real-time, multicontextual identity intelligence—that verifies identities as a group or signature. Any other plan of attack is unlikely to pick up the synthetic scent.

Per the slide above, a unique activity-backed data set augments the data from a CDP and fraud platform to ferret out synthetic accounts. To catch these slithery fraudsters more data can and should be deployed. Knowing how an identity behaved online prior to becoming a customer bolsters the data science models used to give CDPs a synthetic spring cleaning.

What does this look like in practice? Say a real-time scan of in-app customer activity reveals, over an extended period, that multiple identities check their account balance every Thursday at exactly 8:17 a.m. Patterns such as this rule out coincidence and uncover the otherwise clandestine footprints of SuperSynthetic identities.
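A complementary, equally hypothetical sketch (again, not actual product logic) scores each identity on its own regularity: a real customer’s session times drift across the day, while a scripted identity checks in with machine-like precision. The session format, function name, and two-minute threshold below are assumptions for illustration.

```python
import statistics
from datetime import datetime, timedelta

def robotic_regularity(sessions, max_stdev_minutes=2.0, min_sessions=6):
    """sessions: dict of identity_id -> list of session-start datetimes.

    Flags identities whose minute-of-day barely varies across sessions.
    """
    flagged = []
    for identity_id, stamps in sessions.items():
        if len(stamps) < min_sessions:
            continue  # not enough history to judge
        minutes = [ts.hour * 60 + ts.minute for ts in stamps]
        if statistics.pstdev(minutes) <= max_stdev_minutes:
            flagged.append(identity_id)
    return flagged

# A bot checking its balance every Thursday at exactly 8:17 a.m., next to
# a human whose session times wander.
sessions = {
    "bot-1": [datetime(2024, 1, 4, 8, 17) + timedelta(weeks=w) for w in range(6)],
    "human-1": [
        datetime(2024, 1, 4, h, m) + timedelta(days=3 * i)
        for i, (h, m) in enumerate(
            [(9, 2), (14, 40), (20, 15), (8, 55), (12, 5), (18, 31)]
        )
    ],
}
flagged = robotic_regularity(sessions)  # only "bot-1" is flagged
```

On its own, one rigid schedule proves little; combined with the group-level view described above, identical rigidity across many identities is the kind of pattern that rules out coincidence.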

The intelligence and elusiveness of SuperSynthetics are increasing at a breakneck pace. In addition to terrorizing CDPs, SuperSynthetics have the potential to peddle sports betting accounts, carry out financial aid scams, and even swing the stock market via disinformation campaigns. Given what's at stake, companies that fail to combat SuperSynthetics with a thorough, activity-driven approach could be in for serious trouble in the year ahead.

College students are lifelong learners. So are AI-powered fraudsters.

With each passing day AI grows more powerful and more accessible. This gives fraudsters the upper hand, at least for now, as they roll out legions of AI-powered fake humans that even governmental countermeasures—such as the Biden administration’s recent executive order—will be lucky to slow down.

Among other nefarious activities, bad actors are leveraging AI to peddle synthetic bank and online sports betting accounts, swing elections, and spread disinformation. They’re also fooling banks with another clever gimmick: posing as college freshmen.

College students, particularly underclassmen, have long been a target demographic for banks. Fraudsters are well aware and know that banks’ yearning for customer acquisition, coupled with their inadequate fraud prevention tools, present an easy cash-grab opportunity (and, perhaps, a chance to revisit their collegiate years).

Early bank gets the bullion

The appeal of a new college student from a customer acquisition perspective can't be overstated.

A young, impressionable kid is striking out on their own for the first time. They need a credit card to pay for both necessary and unnecessary things (mostly the latter). They need a bank. And their relationship with that bank? There’s a good chance it will outlast most of their romantic relationships.

This could be their bank through college, through their working years, the bank they procure a loan from for their first house, the bank they encourage their kids and grandkids to bank with. In a college freshman, banks don't just land one client but potentially an entire generation of clients. Lifetime value up the wazoo.

Go to any college move-in day and you'll spot bank employees at tables, using giveaway gimmicks to attract students to open new accounts. According to the Consumer Financial Protection Bureau, 40% of students attend a college that's contractually linked to a specific bank. However, as banks shovel out millions so they can market their products at universities, a fleet of synthetic college freshmen lies in wait, with the potential to collectively steal millions of their own.

Playing the part

Today’s fraudsters are master identity-stealers who can dress up synthetic identities to match any persona.

In the case of a fake college freshman, building the profile starts off in familiar fashion: snagging a dormant social security number (SSN) that’s never been used or hasn’t been used in a while. Like many forms of Personally Identifiable Information (PII), stolen SSNs from infants or deceased individuals are readily available on the dark web.

From here, fraudsters can string together a combination of stolen and made-up PII to create a synthetic college freshman identity that qualifies for a student credit card. No branch visit necessary, and IDs can be deepfaked. The synthetic identity makes small purchases and pays them off on time—food, textbooks, phone bill—building trust with the bank and improving their already respectable credit score of around 700. They might sign up for an alumni organization and/or apply for a Pell Grant to further solidify their collegiate status.

Pell Grants, of course, require admission to a college—a process that, similar to acquiring a credit card from a bank, is easy pickings for synthetic fraudsters.

The ghost student epidemic

Any bank that doesn’t take the synthetic college freshman use case seriously should study the so-called “ghost student” phenomenon: fake college enrollees that rob universities of millions. 

In California alone, these synthetic students, who employ the same playbook as bank-swindling synthetics, comprise 20% of community college applications (more than 460K). Thanks to the increased adoption of online enrollment and learning post-pandemic, relaxed verification protocols for household income, and the proliferation of AI-powered fake identities, ghost students can easily grab federal aid and never have to attend class.

Like ghost students, synthetic college freshmen can apply for a credit card without ever setting foot inside a bank branch. Online identity verification is a breeze for the seasoned bad actor. Given the democratization of powerful generative AI tools, ID cards and even live video interviews over Zoom or another video client can be deepfaked.

A (SuperSynthetic) tale as old as time

Both the fake freshmen and ghost student problems are symptomatic of a larger issue: SuperSynthetic™ identities.

SuperSynthetic bots are the most sophisticated yet. Forget the brute force attacks of yore; SuperSynthetics are incredibly lifelike and patient. These identities play nice for several months or even years, building trust by paying off credit card transactions on time and otherwise interacting like a real human would. But, once the bank offers a loan and a big payday is in sight, that SuperSynthetic is out the door.

An unorthodox threat like a SuperSynthetic identity can’t be thwarted by traditional fraud prevention tools. Solutions reliant on individualistic, static data won’t cut it. Instead, banks (and universities, in the case of ghost students) need a solution powered by scalable and dynamic real-time data. The latter approach verifies identities as a group or signature: the only way to pick up on the digital footprints left behind by SuperSynthetics.

As humanlike as SuperSynthetic identities are, they aren't flawless. With a bird's-eye view of identities, patterns of activity—such as SuperSynthetics commenting on the same website at the exact same time every week over an extended period—quickly emerge.

Fake college students are one of the many SuperSynthetic personas capable of tormenting banks. But it isn't the uphill battle it appears to be. If banks change their fraud prevention philosophy and adopt a dynamic, bird's-eye approach, they can do some schooling of their own.

Synthetic fraud remains the elephant in the room

The Biden administration’s recent executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” naturally caused quite a stir among the AI talking heads. The security community also joined the dialog and expressed varying degrees of confidence in the executive order’s ability to protect the federal government and private sector against bad actors.

Clearly, any significant effort to enforce responsible and ethical AI use is a step in the right direction, but this executive order isn’t without its shortcomings. Most notable is its inadequate plan of attack against synthetic fraudsters—specifically those created by Generative AI.

With online fraud reaching a record $3.56 billion through the first half of 2022 alone, financial institutions are an obvious target of AI-based synthetic identities. A Wakefield report commissioned by Deduce found that 76% of US banks have synthetic accounts in their database, and a whopping 86% have extended credit to synthetic “customers.”

However, the shortsightedness of the executive order also carries with it a number of social and political ramifications that stretch far beyond dollars and cents.

Missing the (water)mark

A key element of Biden’s executive order is the implementation of a watermarking system to differentiate between content created by humans and AI, a topical development in the wake of the SAG-AFTRA strike and the broader artist-versus-AI clash. Establishing provenance of an object via a digital image or signature would seem like a sensible enough solution to identifying AI-generated content and synthetic fraud, that is, if all of the watermarking mechanisms currently at our disposal weren’t utterly unreliable.

A University of Maryland professor, Soheil Feizi, as well as researchers at Carnegie Mellon and UC Santa Barbara, circumvented watermarking verification by adding fake watermarks to human-made images. They were able to remove genuine watermarks just as easily.

It’s also worth noting that the watermarking methods laid out in the executive order were developed by big tech. This raises concerns around a walled-garden effect in which these companies are essentially regulating themselves while smaller companies follow their own set of rules. And don’t forget about the fraudsters and hackers who, of course, will gladly continue using unregulated tools to commit AI-powered synthetic fraud, as well as overseas bad actors who are outside US jurisdiction and thus harder to prosecute.

The deepfake dilemma

Another element of many synthetic fraud attacks, deepfake technology, is addressed in the executive order but a clear-cut solution isn’t proposed. Deepfaking is as complex and democratized as ever—and will only grow more so in the coming years—yet the executive order falls short of recommending a plan to continually evolve and keep pace.

Facial recognition verification is employed at the government and state level, but even novice bad actors can use AI to deepfake their way past these tools. Today, anyone can deepfake an image or video with a few taps. Apps like FakeApp can seamlessly integrate someone's face into an existing video, or generate an entirely new one. A single cropped face from a social media photo can spawn a speaking, blinking, head-moving entity. Deepfaked selfies and live video calls pass verification with flying colors.

In this era of remote customer onboarding, coinciding with unprecedented access to deepfake tools, it behooves executive orders and other legislation to offer a more concrete solution to deepfakes. Finservs (financial services) companies are in the crosshairs, but so are social media platforms and their users; the latter poses its own litany of dangers.

Synthetic fraud: multitudes of mayhem

The executive order’s watermarking notion and insufficient response to deepfakes don’t squelch the multibillion-dollar synthetic fraud problem.

Synthetic fraudsters still have the upper hand. With Generative AI at their disposal, they can create patient and incredibly lifelike SuperSynthetic™ identities that are extremely difficult to intercept. Worse, “fraud-as-a-service” organizations peddle synthetic mule accounts from major banks, and also sell synthetic accounts on popular sports betting sites—new, aged, geo-located—for as little as $260.

More worrisome, amid the rampant spread of disinformation online, is the potential for synthetic accounts to cause social panic and political upheaval.

Many users struggle to identify AI-generated content on X (formerly Twitter), much less any other platform, and social networks that charge a nominal fee to "verify" an account offer synthetic identities a cheap way to appear even more authentic. All it takes is one post shared hundreds of thousands or millions of times for users to mobilize against a person, nation, or ideology. A single doctored image or video could spook investors, incite a riot, or swing an election.

“Election-hacking-as-a-service” is indeed another frightening offshoot of synthetic fraud, to the chagrin of politicians (or those on the wrong side of it, at least). These fraudsters weaponize their armies of AI-generated social media profiles to sway voters. One outfit in the Middle East interfered in more than 33 elections.

Banks or betting sites, social uprisings or rigged elections: unchecked synthetic fraud, buttressed by AI, will continue to wreak havoc in multitudinous ways if it isn't combated by an equally intelligent and scalable approach.

The best defense is a good offense

The executive order, albeit an encouraging sign of progress, is too vague in its plan for stopping AI-generated content, deepfakes, and the larger synthetic fraud problem. The programs and tools it says will find and fix security vulnerabilities aren’t clearly identified. What do these look like? How are they better than what’s currently available?

AI-powered threats grow smarter by the second. Verbiage like “advanced cybersecurity program” doesn’t say much; will these fraud prevention tools be continually developed so they’re in lockstep with evolving AI threats? To its credit, the executive order does mention worldwide collaboration in the form of “multilateral and multi-stakeholder engagements,” an important call-out given the global nature of synthetic fraud.

Aside from an international team effort, the overarching and perhaps most vital key to stopping synthetic fraud is an aggressive, proactive philosophy. Stopping AI-generated synthetic and SuperSynthetic identities requires a preemptive, not reactive, approach. We shouldn't wait for authenticated—or falsely authenticated—content and identities to show up, but rather stop synthetic fraud well before infiltration can occur. And, given the prevalence of synthetic identities, they should have a watermark all their own.

76% of finservs are victims of synthetic fraud

In 1938, Orson Welles’ infamous radio broadcast of The War of the Worlds convinced thousands of Americans to flee their homes for fear of an alien invasion. More than 80 years later, the public is no less gullible, and technology unfathomable to people living in the 1930s allows fake humans to spread false information, bamboozle banks, and otherwise raise hell with little to no effort.

These fake humans, also known as synthetic identities, are ruining society in myriad ways: tampering with electorate polls and census data, disseminating misleading social media posts with real-world consequences, sharing fake articles on Reddit that subsequently skew Large Language Models that drive platforms such as ChatGPT. And, of course, bad actors can leverage fake identities to steal millions from financial institutions.

The bottom line is this: synthetic fraud is prevalent; financial services companies (finservs), social media platforms, and many other organizations are struggling to keep pace; and the impact, both now and in the future, is frighteningly palpable.

Here is a closer look at how AI-powered synthetic fraud is infiltrating multiple facets of our lives.

Accounts for sale

If you need a new bank account, you’re in luck: obtaining one is as easy as buying a pair of jeans and, in all likelihood, just as cheap.

David Maimon, a criminologist and Georgia State University professor, recently shared a video from Mega Darknet Market, one of the many cybercrime syndicates slinging bank accounts like Girl Scout Cookies. Mega Darknet and similar “fraud-as-a-service” organizations peddle mule accounts from major bank brands (in this case Chase) that were created using synthetic identity fraud, in which scammers combine stolen Personally Identifiable Information (PII) with made-up credentials.

But these cybercrime outfits take it a step further. With Generative AI at their disposal, they can create SuperSynthetic™ identities that are incredibly patient, lifelike, and difficult to catch.

Aside from bank accounts, fraudsters are selling accounts on popular sports betting sites. The verified accounts—complete with name, DOB, address, and SSN—can be new or aged and even geo-located, with a two-year-old account costing as little as $260. Perfect for money launderers looking to wash stolen cash.

Fraudsters are selling stolen bank accounts as well as stolen accounts from sports betting sites.

Cyber gangs like Mega Darknet also offer access to the very Generative AI tools they use to create synthetic accounts. This includes deepfake technology which, besides fintech fraud, can help carry out “sextortion” schemes.

X-cruciatingly false

Anyone who’s followed the misadventures of X (formerly Twitter) over the past year, or used any social media since the late 2010s, knows that Elon’s embattled platform is a breeding ground for bots and misinformation. Generative AI only exacerbates the problem.

A recent study found that X users couldn’t distinguish AI-generated content (GPT-3) from human-generated content. Most alarming is that these same users trusted AI-generated posts more than posts from real humans.

In the US, where 20% of the population famously can't locate the country on a world map, and elsewhere, these synthetic accounts and their large-scale misinformation campaigns pose myriad risks, especially if said accounts are "verified." It wouldn't take much to incite a riot, or stoke anger and subsequent violence toward a specific group of people. How about sharing a bogus picture of an exploded Pentagon that impacts the stock market? Yep. That, too.

This fake image of an explosion near the Pentagon exemplifies the danger of synthetic accounts spreading misinformation.


Few topics are timelier, or rile up users more, than election interference, another byproduct of the fake human—and fake social media—epidemic. Indeed, the spreading of false information in service of a particular political candidate or party existed well before social media, but now the stakes have increased exponentially.

If fraud-as-a-service isn’t ominous-sounding enough, election-hacking-as-a-service might do the trick. Groups with access to armies of fake social media profiles are weaponizing disinformation to sway elections any which way. Team Jorge is just one example of these election meddling units. Brought to light via a recent Guardian investigation, Team Jorge’s mastermind Tal Hanan claimed he manipulated upwards of 33 elections.

The rapid creation and dissemination of fake social media profiles and content is far more harmful and widespread with Generative AI in the fold. Flipping elections is one of the worst possible outcomes, but grimmer consequences will arise if automated disinformation isn’t thwarted by an equally intelligent and scalable solution.

Finservs in the crosshairs

Cash is king. Synthetic fraudsters want the biggest haul, even if it’s a slow-burn operation stretched out over a long period of time. Naturally, that means finservs, who lost nearly $2 billion to bank transfer or payment fraud last year, are number one on their hit list. 

Most finservs today don’t have the tools to effectively combat AI-generated synthetic and SuperSynthetic fraud. First-party synthetic fraud—fraud perpetrated by existing “customers”—is rising thanks to SuperSynthetic “sleeper” identities that can imitate human behavior for months before cashing out and vanishing at the snap of a finger. SuperSynthetics can also use deepfake technology to evade detection, even if banks request a video interview during the identity verification phase.

It’s not like finservs are dilly-dallying. In a study from Wakefield, commissioned by Deduce, 100% of those surveyed had synthetic fraud prevention solutions installed along with sophisticated escalation policies. However, more than 75% of finservs already had synthetic identities in their customer databases, and 87% of those respondents had extended credit to fake accounts.

Fortunately for finservs and others trying to neutralize synthetic fraud, it’s not impossible to outsmart generative AI. With the right foundation in place—specifically a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence—and a change in philosophy, even a foe that grows smarter and more humanlike by the second can be thwarted.

This philosophical change is rooted in a top-down, bird’s-eye approach that differs from traditional, individualistic fraud prevention solutions that examine identities one by one. A macro view, on the other hand, sees identities collectively and groups them into a single signature which uncovers a trail of digital footprints. Behavioral patterns such as social media posts and account actions rule out coincidence. The SuperSynthetic smokescreen evaporates.

Whether it’s bad actors selling betting accounts, social media platforms stomping out disinformation, or finservs protecting their bottom lines, fake humans are more formidable than ever with generative AI and SuperSynthetic fraud at their disposal. Most companies seem to be aware of the stakes, but singling out bogus users and SuperSynthetics requires a retooled approach. Otherwise, revenue, users, and brand reputations will dwindle, and the ways in which fake accounts wreak havoc will multiply.

That rise in first-party synthetic fraud is no fluke. You have a SuperSynthetic identity problem.

Online fraud in the US totaled a record-breaking $3.56 billion through the first half of last year. Most consumer-facing companies have done the sensible thing and spent six or seven figures fortifying their perimeter defenses against third-party fraud.

But another effective, and seemingly counterintuitive, strategy for stopping today's fraudsters is to think inside-out, not just outside-in. In other words, first-party synthetic fraud—or fraud perpetrated by existing "customers"—is threatening bottom lines in its own right, by way of AI-generated synthetic "sleeper" identities that play nice for months before executing a surprise attack.

Banks and other finserv (financial services) companies shouldn't be surprised if their first-party synthetic fraud is off the charts. Deduce estimates that between 3% and 5% of new customers acquired in the past year are actually synthetic identities, specifically SuperSynthetic™ identities, created using generative AI.

The good news is that a simple change in philosophy will go a long way in neutralizing synthetic first-party fraudsters before they’re offered a loan or credit card.

First-party problems

Third-party fraud is when bad actors pose as someone else. It's your classic case of identity theft. They leverage stolen credit card info and/or other credentials, or combine real and fake PII (Personally Identifiable Information) to create a synthesized identity, for financial or material gain. Consequently, the victims whose identities were stolen notice fraudulent transactions on their bank statements, or debt collectors track them down, and it's apparent they've been had.

First-party synthetic fraud is even more cunning—and arguably more frustrating—because the account information and activity appear genuine, complicating the fraud detection process. The aftermath is where it hurts the most. Since, unlike third-party fraud, there isn’t an identifiable victim, finservs have no one to collect the debt from and are forced to bite the bullet.

Image Credit: Experian

One hallmark of first-party synthetic fraud is its patience. These sleeper identities appear legitimate for months, sometimes more than a year, making small deposits every now and then while interacting with the website or app like a real customer. Once they bump up their creditworthiness score and qualify for a loan or line of credit, it's game over. The fraudster executes a "bust-out," or "hit-and-run," spending the money and leaving the bank with uncollectible debt.
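The bust-out pattern lends itself to a simple heuristic. The sketch below is illustrative only: the spend histories, baseline window, and spike ratio are assumptions, not a production rule, and real detection would weigh many more signals than monthly spend:

```python
# Hypothetical monthly spend history per account (most recent month last), in dollars.
histories = {
    "acct_A": [22, 35, 18, 40, 25, 30, 28, 33, 9500],            # quiet run, sudden spike
    "acct_B": [400, 850, 620, 910, 760, 1200, 980, 1100, 1350],  # organic growth
}

def bust_out_risk(history, quiet_months=6, spike_ratio=20):
    """Flag a sudden spend spike after a long run of small, trust-building activity."""
    if len(history) <= quiet_months:
        return False  # not enough history to establish a baseline
    baseline = sum(history[:-1][-quiet_months:]) / quiet_months
    return history[-1] >= spike_ratio * baseline

for acct, history in histories.items():
    print(acct, bust_out_risk(history))
# acct_A True
# acct_B False
```

Note the asymmetry the heuristic exploits: a genuine customer's spending tends to grow gradually, while a sleeper identity's whole purpose is to keep the baseline artificially small until the payout.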

This isn’t the work of your average synthetic identity. Such a degree of calculation and human-like sophistication can only be attributed to SuperSynthetic identities.

That escalated quickly

An Equifax report found that nearly two million consumer credit accounts, over the span of a year, were potentially synthetic identities. More than 30% of these accounts represented a major delinquency risk with cases averaging $8K-10K in losses.

The blame for rising first-party synthetic fraud—and the finservs left in its wake—can be placed squarely on the shoulders of SuperSynthetic identities. These AI-generated bots are proliferating worldwide, scaling their sleeper networks to execute bust-outs on a grand scale.

SuperSynthetics—featuring a three-pronged attack of synthetic identity fraud, legitimate credit history, and deepfake technology—need not brute-force their way into a bank’s pockets. Aside from a SuperSynthetic’s patient approach and aged, geo-located identity, its deepfake capability, a benefit of the recent generative AI explosion, is key to securing the long-awaited loan or credit card.

Selfie verification? A video interview? No problem. Deepfake tools, some of them free, are advanced enough to trick finservs even if they have liveness detection in their stack. Document verification? There’s a deepfake for that, too.

SuperSynthetics don’t have a kryptonite, per se. But analyzing identities from a different angle boosts the chances of a finserv spotting SuperSynthetics before they can circumvent the loan or credit verification stage.

Dusting for fingerprints

If finservs want to sniff out SuperSynthetic identities and successfully combat first-party synthetic fraud, they can’t be afraid of heights.

A top-down, bird’s-eye view is the best way to uncover the digital fingerprints or signatures of SuperSynthetics. Individualistic fraud prevention tools overlook these behavioral patterns, but a macro approach, which studies identities collectively, illuminates forensic evidence like a black light.

A top-down view reveals digital fingerprints that otherwise would go undetected.

Grouping identities into a single signature—and examining them alongside millions of fraudulent identities—reveals indisputable evidence of SuperSynthetic activity: social media posts and account actions that consistently occur on the same day, at the same time, each week across a group or signature of identities. Coincidence is out of the question.
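One way to sketch this grouping idea is to reduce each identity's recurring activity to a fingerprint and then cluster identical fingerprints. Everything here is hypothetical (the identity IDs, the event tuples, the "3 or more" threshold); it simply illustrates how a collective view exposes what a one-by-one review would miss:

```python
from collections import defaultdict
import hashlib

# Hypothetical per-identity activity logs: (action, weekday, HH:MM) tuples.
activity = {
    "id_101": [("post", "Mon", "09:00"), ("login", "Thu", "08:17")],
    "id_102": [("post", "Mon", "09:00"), ("login", "Thu", "08:17")],
    "id_103": [("post", "Mon", "09:00"), ("login", "Thu", "08:17")],
    "id_200": [("login", "Tue", "14:42"), ("post", "Fri", "19:05")],  # organic user
}

def behavioral_signature(events):
    """Reduce an identity's recurring activity to a stable fingerprint."""
    canonical = "|".join(",".join(e) for e in sorted(events))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

groups = defaultdict(list)
for identity, events in activity.items():
    groups[behavioral_signature(events)].append(identity)

# Signatures shared by 3+ identities are too synchronized to be coincidence.
suspicious = [ids for ids in groups.values() if len(ids) >= 3]
print(suspicious)  # [['id_101', 'id_102', 'id_103']]
```

Reviewed individually, each of the first three identities looks like an ordinary customer; it's only when their signatures collide that the coordinated pattern surfaces.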

Of course, not every finserv has the firepower to adopt this strategy. In order to enable a big-picture view, companies’ anti-fraud stacks need a large and scalable source of real-time, multicontextual, activity-backed identity intelligence.

There are other avenues. Consider, for example, the only 100-percent foolproof solution to first-party synthetic fraud: in-person identity verification. Even if this approach were used exclusively at the pre-loan juncture, it seems unlikely that many companies would take on the added friction, though driving down to the bank is a small price to pay for a five- or ten-thousand-dollar loan.

If finservs don’t wish to revisit the good old days of face-to-face verification, the top-down, signature approach is the only other viable deterrent to first-party synthetic fraud. Solutions that analyze identities one by one won’t stop SuperSynthetics before a loan or credit card is granted, and by that point it’s already over.

An old-school approach could be the answer for finservs

For many people, video conferencing apps like Zoom made work, school, and other everyday activities possible amid the global pandemic—and more convenient. Remote workers commuted from sleeping position to upright position. Business meetings resembled “Hollywood Squares.” Business-casual meant a collared shirt up top and pajama pants down low.

Fraudsters were also quite comfortable during this time. Unprecedented numbers of people sheltering in place naturally caused an ungodly surge in online traffic and a corresponding increase in security breaches. Users were easy prey, and so were many of the apps and companies they transacted with.

In the financial services (finserv) sector, branches closed down and ceased face-to-face customer service. Finserv companies relied on Zoom for document verification and manual reviews, and bad actors, armed with stolen credentials and improved deepfake technology, took full advantage.

Even in the face of AI-generated identity fraud, most finservs still use remote identity verification to comply with regulatory KYC requirements and when it comes time to offer a loan. It's easier than meeting in person, and what customer doesn't prefer verifying their identity from the comfort of their couch?

But AI-powered synthetic identities are getting smarter and, while deepfake deterrents are closing the gap, a return to an old-school approach remains the only foolproof option for finservs.

Deepfakes, and the SuperSynthetic™ quandary

Gen AI platforms such as ChatGPT and Bard, coupled with their nefarious brethren FraudGPT and WormGPT and the like, are so accessible it’s scary. Everyday users can create realistic, deepfaked images and videos with little effort. Voices can be cloned and manipulated to say anything and sound like anyone. The rampant spread of misinformation across social media isn’t surprising given that nearly half of people can’t identify a deepfaked video.

More disturbing: deepfaked Mona Lisa, or that someone made this 3+ years ago?

Finserv companies are especially susceptible to deepfaked trickery, and bypassing remote identity verification will only get easier as deepfake technology continues to rapidly improve.

For SuperSynthetics, the new generation of fraudulent deepfaked identities, fooling finservs is quite easy. SuperSynthetics—a one-two-three punch of deepfake technology, synthetic identity fraud, and legitimate credit histories—are more humanlike and individualistic than any previous iteration of bot. The bad actors who deploy these SuperSynthetic bots aren't in a rush; they're willing to play the long game, depositing small amounts of money over time and interacting with the website to convince finservs they're prime candidates for a loan or credit application.

When it comes time for the identity verification phase, SuperSynthetics deepfake their documents, selfie, and/or video interview…and they’re in.

An overhaul is in order

Deepfake technology, which first entered the mainstream in 2018, is still in its relative infancy yet already pokes plenty of holes in remote identity verification.

The “ID plus selfie” process, as Gartner analyst Akif Khan calls it, is how most finservs are verifying loan and credit applicants these days. The user takes a picture of their ID or driver’s license, authenticity is confirmed, then the user snaps a picture of themselves. The system checks the selfie for liveness and makes sure the biometrics line up with the photo ID document. Done.

The process is convenient for legitimate customers and fraudsters alike thanks to the growing availability of free deepfake apps. Using these free tools, fraudsters can deepfake images of docs and successfully pass the selfie step, most commonly by executing a “presentation attack” in which their primary device’s camera is aimed at the screen of a second device displaying a deepfake.

Khan advocates for a layered approach to deepfake mitigation, including tools that detect liveness and check for certain types of metadata. This is certainly on point, but there’s an old-school, far less technical way to ward off deepfaking fraudsters. Its success rate? 100%.

The good ol’ days

Remember handshakes? How about eye contact that didn’t involve staring into a camera lens? These are merely vestiges of the bygone in-person meetings that many finservs used to hold with loan applicants pre-COVID.

Outdated and less efficient as face-to-face meetings with customers might be, they're also the only rock-solid defense against deepfakes.

Not even advanced liveness detection is a foolproof deepfake deterrent.

Sure, the upper crust of finserv companies likely have state-of-the-art deepfake deterrents in place (i.e., 3D liveness detection solutions). But liveness detection doesn’t account for deepfaked documents or, more importantly, video, or the fact that the generative AI tools available to fraudsters are advancing just as fast as vendor solutions, if not faster. It’s a full-blown AI arms race, and with it comes a lot of question marks.

In-person verification (only for high-risk activities) puts these fears to bed. Is it frictionless? Obviously far from it, though workarounds, such as traveling notaries that meet customers at their residence, help ease the burden. But if heading down to a local branch for a quick meet-and-greet is what it takes to snag a $10K loan, will a customer care? They’d probably fly across state lines if it meant renting a nicer apartment or finally moving on from their decrepit Volvo.

Time to layer up

Khan’s recommendation that finservs assemble a superteam of anti-deepfake solutions is sound, so long as companies can afford to do so and can figure out how to orchestrate the many solutions into a frictionless consumer experience. Vendors have AI in their own right, powering tools that directly identify deepfakes through patterns, or that key in on metadata such as the resolution of a selfie. Combine these with the most crucial layer, liveness detection, and the final result is a stack that can at the very least compete against deepfakes.
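To make the metadata layer concrete, here is a minimal, hypothetical screening sketch. The field names, thresholds, and "suspect software" list are illustrative assumptions for this post, not any vendor's actual API; real products combine far richer signals with liveness detection.

```python
# Hypothetical metadata screening layer: flags selfies whose extracted
# metadata looks inconsistent with a live smartphone capture.
# All thresholds and field names below are illustrative assumptions.

SUSPECT_SOFTWARE = {"deepfakestudio", "faceswap", "screen-capture"}

def screen_selfie(meta: dict) -> list:
    """Return a list of risk flags for one uploaded selfie's metadata."""
    flags = []
    w, h = meta.get("width", 0), meta.get("height", 0)
    # Presentation attacks (re-photographing a screen) often land at
    # odd, low resolutions compared with native camera output.
    if w * h < 1_000_000:
        flags.append("low_resolution")
    sw = meta.get("software", "").lower()
    if any(s in sw for s in SUSPECT_SOFTWARE):
        flags.append("suspect_software_tag")
    # Genuine captures normally carry camera EXIF; generated images rarely do.
    if not meta.get("camera_model"):
        flags.append("missing_camera_exif")
    return flags
```

A clean, high-resolution capture with camera EXIF returns no flags; a low-resolution image tagged by editing software stacks up several. None of these checks is decisive alone, which is exactly why they belong in a layered stack rather than standing on their own.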

SuperSynthetics aren’t as easy to neutralize. In previous posts, we’ve advocated for a “top-down” anti-fraud solution that spots these types of identities before the loan or credit application stage. Unlike individualistic fraud prevention tools, this bird’s-eye view reveals digital fingerprints—concurrent account activities, simultaneous social media posts, etc.—that would otherwise go undetected.
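As a rough sketch of what that bird's-eye view might look for, the toy function below flags pairs of accounts whose activity timestamps line up too often to be coincidence. The five-second window and three-hit threshold are assumptions invented for the example, not parameters of any real system.

```python
from collections import defaultdict

# Toy "top-down" correlation pass: instead of scoring accounts one at a
# time, look for clusters of accounts whose activity timestamps align too
# tightly, too often, to be coincidence. Window and threshold are
# illustrative assumptions.

def correlated_accounts(events, window=5, min_hits=3):
    """events: list of (account_id, unix_timestamp).
    Returns pairs of accounts that acted within `window` seconds of each
    other at least `min_hits` times."""
    hits = defaultdict(int)
    events = sorted(events, key=lambda e: e[1])
    for i, (acct_a, t_a) in enumerate(events):
        for acct_b, t_b in events[i + 1:]:
            if t_b - t_a > window:
                break  # events are sorted, so no later match is possible
            if acct_a != acct_b:
                hits[tuple(sorted((acct_a, acct_b)))] += 1
    return {pair for pair, n in hits.items() if n >= min_hits}
```

Two accounts that repeatedly post, log in, or transact within seconds of one another surface as a pair; an account acting alone never does. That is the essence of the collective view: the signal lives between accounts, not within any single one.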

In the meantime, it doesn’t hurt to consider the upside of an in-person approach to verifying customer identities (prior to extending a loan, not onboarding). No, it isn’t flashy, nor is it flawless. However, it is reliable and, if finservs effectively articulate the benefit to their customers—protecting them from life-altering fraud—chances are they’ll understand.

Customer or AI-Generated Identity? The lines are as blurry as ever.

Today’s fraudsters are truly, madly, deeply fake.

Deepfaked identities, which use AI-generated audio or visuals to pass for a legitimate customer, are multiplying at an alarming rate. Banks and other fintech companies, which collectively lost nearly $2 billion to bank transfer and payment fraud in 2022, are firmly in their crosshairs.

Sniffing out deepfaked chicanery isn’t easy. One study found that 43% of people struggle to identify a deepfaked video. It’s especially concerning that this technology, while still in its infancy, is already capable of luring consumers and businesses into fraudulent transactions.

Over time, deepfakes will seem increasingly less fake and much harder to detect. In fact, an offshoot of deepfaked synthetic identities, the SuperSynthetic™ identity, has already emerged from the pack. Banks and financial organizations have no choice but to stay on top of developments in deepfake technology and swiftly adopt a solution to combat this unprecedented threat.

Rise of the deepfakes

Deepfakes have come a long way since exploding onto the scene roughly five years ago. Back then, deepfaked videos aimed to entertain. Most featured harmless superimpositions of one celebrity’s face onto another, such as this viral Jennifer Lawrence-Steve Buscemi mashup.

The trouble started when users began deepfaking sexually explicit videos, opening up a massive can of privacy- and ethics-related worms. Then a 2018 video of a deepfaked Barack Obama speech showed just how dangerous the technology could be.

Image Credit: DHS

The proliferation and growing sophistication of deepfakes over the past five years can be attributed to the democratization of AI and deep learning tools. Today, anyone can doctor an image or video with just a few taps. FakeApp, Lyrebird, and countless other apps enable smartphone users to seamlessly integrate someone’s face into an existing video, or generate a new video that can easily pass for the real deal.

Given this degree of accessibility, the threat of deepfakes to banks and fintech companies will only intensify in the months and years ahead. The specter of new account fraud, perpetrated by way of a deepfaked synthetic identity, looms large in the era of remote customer onboarding.

This is a stickup

Synthetic identity fraud, in which bad actors invent a new identity using a combination of stolen and made-up credentials, has already cost banks upwards of $6 billion. Deepfake technology only adds fuel to the fire.

A deepfaked synthetic identity heist doesn’t require any heavy lifting. A fraudster crops someone’s face from a social media picture and is well on their way to spawning a lifelike entity that speaks, blinks, and moves its head on screen. Image- or video-based identity verification, a KYC protocol designed to deter potential fraud before an account is opened or credit is extended, is rendered moot. The fraudster’s uploaded selfie will be a dead ringer for the face on the ID card. Even a live video conversation with an agent is unlikely to ferret out a deepfaked identity.

Not even Columbo can spot a deepfaked synthetic identity.

Audio-based verification processes are circumvented just as easily. Exhibit A: the vulnerability of the voice ID technology used by banks across the US and Europe, ostensibly another layer of login security that prompts users to say some iteration of, “My voice is my password.” This sounds great in theory, but AI-generated audio solutions can clone anyone’s voice and create a virtually identical replica. One user, for example, tapped voice creation tool ElevenLabs to clone his own voice using an audio sample. He accessed his account in one try.

In this use case, the bad actor would also need a date of birth to access the account. But, thanks to frequent big-time data leaks—such as the recent Progress Software MOVEit breach—dates of birth and other Personally Identifiable Information (PII) are readily available on the dark web.

Here come the SuperSynthetics

In deepfaked synthetic identities, banks and financial services platforms clearly face a formidable foe. But this worthy opponent has been in the gym, protein-shaking and bodyhacking itself into something stronger and infinitely more dangerous: the SuperSynthetic identity.

SuperSynthetic identities, armed with the same deepfake capabilities as regular synthetics (and then some), bring an even greater level of Gen AI-powered smarts to the table. No need for a brute force attack. SuperSynthetics operate with a sophistication and discernment so lifelike it’s spooky. In this regard, one need only look at the patience of these bots.

SuperSynthetics are all about the long con. Their aged and geo-located identities play nice for months, engaging with the website and making small deposits here and there, enough to appear human and innocuous. Once enough of these transactions accumulate and the bank’s trust is won, a credit card or loan is extended. Any additional verification is bypassed via deepfake, of course. When the money is deposited into the SuperSynthetic account, the bad actor immediately withdraws it, along with their seed money, before finding another bank to swindle.

How prevalent are SuperSynthetics? Deduce estimates that between 3% and 5% of financial services accounts onboarded within the past year are in fact SuperSynthetic “sleepers” waiting to strike. That alone warrants a second look at how customers are verified before obtaining a loan or credit card, including the consideration of in-person verification to rule out any deepfake activity.

No time like the present

If deepfaked synthetic identities don’t call for a revamped cybersecurity solution, deepfaked SuperSynthetic identities will certainly do the trick. Our money is on a top-down approach that views synthetic identities collectively rather than individually. Analyzing synthetics as a group uncovers their digital footprints—signature online behaviors and patterns too consistent to suggest mere coincidence.

Whatever banks choose to do, kicking the can down the road only works in favor of the fraudsters. With every passing second, the deepfakes are looking (and sounding) more real.

Time is a-tickin’, money is a-burnin’, and customers are a-churnin’.