Gen AI fraud flusters marketers, fraud teams, and customers

Big tech continues to tout the unprecedented intelligence and endless potential of generative AI. And for good reason: It’s tough to ignore the efficiency and reliability of a Gen AI superbrain that doesn’t sleep, call off, or overdrink at the office holiday party.

But as Gen AI gets smarter, so do fraudsters. Fraud is already up 20% year-over-year, and the accessibility of AI has proliferated synthetic identities to a startling degree.

Impersonation fraud, which includes synthetic “Frankenstein” identities consisting of real and fake PII (Personally Identifiable Information), accounts for 85% of all fraud. Synthetic identities are so prevalent that even Vanity Fair has likened the problem to “a Kafkaesque nightmare.”

Synthetics, bolstered by deepfake technology and realistic account activity, are nearly impossible to catch. Friend or foe? Real or fake? These questions are pulling marketing and fraud teams in opposite directions, and it’s customers (and businesses) who are paying the price.

SuperSynthetic™, super problematic

As of Q1 2022, one out of every four new accounts was fake. One can imagine how much that number has increased given the AI and synthetic fraud surge. The auto lending industry was hit the hardest in 2023, seeing a 98% spike in synthetic attempts to the tune of $7.9B in losses.

Once synthetics make it past the account verification stage it’s essentially game over. Shockingly, more than 87% of companies have extended credit to synthetic customers, and 76% of US banks have synthetic accounts in their database.

Traditional synthetic identities are hard enough to stop with their convincing mishmash of real and made-up PII, but their mighty offspring—SuperSynthetic™ identities—pack an even bigger punch.

Perhaps “mighty” is too strong a word considering the SuperSynthetic’s trademark is its monk-like patience. A fully automated SuperSynthetic identity plays the long game, making small deposits, checking account balances, and otherwise performing humanlike actions over the course of several months. Once enough trust is built, and a line of credit is extended, these fake customers transfer out their funds and exit stage left.

The trickery of SuperSynthetic identities isn’t limited to finservs. Colleges are now dealing with fake students, fake information on social media is flipping elections, and seemingly any platform utilizing an account creation workflow is vulnerable.

Banks are still the primary target, however, much to the chagrin of their marketing and fraud teams.

A churning sensation

There’s nothing wrong with tightening a leaky faucet, but overtightening can cause another leak. Similarly, “fixing” a synthetic identity problem by dialing up the fraud controls to 11 leads to more harm than good.

Indeed, many engineers on fraud teams are constricting their algorithms so rigidly that even slightly suspicious activity is flagged. VPN use, for example, is a callout despite the ubiquity of VPNs among today’s users. Innocuous shorthand in addresses (Main Street vs. Main St.) and names (Andy vs. Andrew) can also tip off jumpy fraud algos. A sign of the times: what used to be low risk is now classified as medium risk, and what was formerly medium risk is now high risk.
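The shorthand problem above is partly a data normalization problem. Here is a minimal sketch, assuming nothing about any vendor's actual rules, of how normalizing addresses and names before matching keeps "Main St." vs. "Main Street" or "Andy" vs. "Andrew" from registering as a mismatch; the abbreviation and nickname tables are illustrative stand-ins:

```python
# Hypothetical sketch: normalize address and name variants before
# comparing records, so benign shorthand doesn't trip an overtuned
# fraud rule. Mappings here are illustrative, not any vendor's logic.

STREET_ABBREVIATIONS = {
    "st": "street", "ave": "avenue", "rd": "road", "blvd": "boulevard",
}
NICKNAMES = {
    "andy": "andrew", "bill": "william", "liz": "elizabeth",
}

def normalize_address(address: str) -> str:
    """Lowercase, strip punctuation, and expand street abbreviations."""
    tokens = address.lower().replace(".", "").replace(",", "").split()
    return " ".join(STREET_ABBREVIATIONS.get(t, t) for t in tokens)

def normalize_name(name: str) -> str:
    """Map common nicknames to a canonical given name."""
    cleaned = name.lower().strip()
    return NICKNAMES.get(cleaned, cleaned)

def records_match(a: dict, b: dict) -> bool:
    """Compare two identity records on normalized fields."""
    return (normalize_name(a["first_name"]) == normalize_name(b["first_name"])
            and normalize_address(a["address"]) == normalize_address(b["address"]))
```

With this in place, "Andy / 123 Main St." and "Andrew / 123 Main Street" compare as the same record rather than a discrepancy worth flagging.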

False positives. ID verification. Manual reviews. Overly stringent fraud defenses annoy marketers and users like none other. The friction is often too unbearable for customers who would rather jump ship than jump through account verification hoops. Consumers, who expect instant gratification in today’s online market, don’t want to hear “Thanks for your application, we are reviewing it and will be in touch.” They’ll quickly start an application at a competing financial institution where they can receive instant credit.

The Deduce team has witnessed this friction firsthand. Our CTO, a customer of his bank for more than two decades, was forced to undergo document verification while using an account, device, and network that had previously been affirmed. Our VP of Marketing, a United Airlines customer for over three decades, was challenged on the United app for a CA-to-NY flight after he had already boarded the plane, passed TSA PreCheck, and scanned his boarding pass.

Friction is nightmarish for marketers as well, who have virtually no shot at meeting their customer acquisition KPIs. As shown in the image above, AI-powered synthetic fraud—and the rigid counterattacks used against it—leads to a three-pronged cluster-you-know-what: (a) more fraud, (b) more invasive verification checks that cost substantially more, and (c) more user friction that leads to account or loan abandonment and impacts lifetime value and customer acquisition costs.

Trust or bust

The key to ferreting out synthetic identities is to do the work early. Leverage real-time, multicontextual, activity-backed identity intelligence to stomp out synthetics pre-account creation.

Deduce employs the infrastructure and strategy that epitomize this preemptive solution. By taking a high-level, “signature” approach that differs from individualistic fraud tools, Deduce uncovers hidden digital footprints. However lifelike synthetic fraudsters may be individually, spotting a cohort of users who post on social media and perform identical account actions at the same time on the same day each week rules out the possibility of legitimacy.

Fraud teams can refrain from ratcheting up their algos knowing that Deduce’s trust scores are 99.5% accurate. If Deduce deduces a user is trustworthy, it’s seen that identity with recency and frequency via multiple trust signals, including, among others, device, network, geolocation, IP, and a “VPN affinity” signal that identifies longtime VPN users.
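As an illustration of the recency-and-frequency idea, the sketch below combines several trust signals into a single score. The signal names, weights, and cutoffs are invented for this example and do not reflect Deduce's actual model:

```python
# Illustrative only: weight each trust signal (device, network, geo, IP)
# by how recently and how often the identity was seen with it.
# Weights and recency tiers are assumptions, not a real scoring model.
from datetime import datetime

SIGNAL_WEIGHTS = {"device": 0.35, "network": 0.25, "geo": 0.2, "ip": 0.2}

def trust_score(sightings: dict, now: datetime) -> float:
    """sightings maps a signal name to a list of datetimes when the
    identity was observed with that signal. Returns a 0-1 score."""
    score = 0.0
    for signal, weight in SIGNAL_WEIGHTS.items():
        events = sightings.get(signal, [])
        if not events:
            continue
        days_since = (now - max(events)).days
        recency = 1.0 if days_since <= 30 else 0.5 if days_since <= 90 else 0.1
        frequency = min(len(events) / 10, 1.0)  # saturates at 10 sightings
        score += weight * recency * frequency
    return round(score, 3)
```

An identity seen often and recently across all four signals scores near 1.0; a never-seen identity scores 0.0, which is the kind of separation that lets a fraud team leave its thresholds alone.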

Of the 920M identities in the Deduce Identity Graph, 47% are trusted. In fact, Deduce is the only vendor in the market that returns a trusted score for an identity. Others offer a “low risk” score, which is risky enough for many fraud managers to flag, resulting in a false positive.

Neutralizing synthetic fraud starts with trust, and it starts early. If you want to keep your marketing team and customers happy, and avoid the losses that come with overaggressive fraud controls, go the preemptive route—before things take a “churn” for the worse.

Some assembly required, but not much

You can be anyone you want to be.

Utter these words to your average cynic and their eyes will roll out of their sockets. But, thanks to AI, this phrase is now more truism than affirmation.

For fraudsters, AI may as well be a giant check from Publishers Clearing House. AI-generated synthetic identities net hefty payouts with minimal effort. Bad actors can seamlessly create and orchestrate synthetic identities at scale to fake out banks, execute election hacking schemes, or any other plot requiring AI-powered chicanery.

How does one go about creating a synthetic identity? It’s easier, and more lucrative, than you might think: Arkose Labs estimates that one in four new accounts is fake, and reports that these fake bots and users steal $697B annually.

We’ve outlined the steps to making a synthetic identity below. (Insert “Don’t try this at home” disclaimer here.) No, we’re not trying to add more reinforcements to the growing army of AI-generated fraudsters. Just making sure banks and other finservs grok the magnitude of this cunning and highly intelligent cyberthreat.

Let’s dig in.

Step one: breaching bad

Creating synthetic identities begins with a big bang. A sizable breach occurs, like the recent AT&T heist affecting 70M+ total customers, and oodles of PII (personally identifiable information) are stolen and subsequently sold on the dark web. (A cursory look on Telegram will surface half a dozen “data brokers” offering data from AT&T.)

Recently deceased people’s SSNs (social security numbers) and infant SSNs are another crowd favorite for fraudsters. After all, the first group won’t need them again and the second group likely won’t need theirs for a couple of decades. In fact, Equifax—yes, the one from the 2017 data breach involving 147M stolen identities—recently announced it had 1.8M fake identities with SSNs in its database.

PII is the lifeblood of any synthetic identity, and the dark web is essentially a flea market where the basic building blocks of a synthetic ID—first names, last names, SSNs, and DOBs (dates of birth)—can be purchased for pennies on the dollar.

On the dark web a synthetic fraudster buys a large batch of PII, usually tens of thousands of identities’ worth. Using stolen SSNs they can access FICO data, without triggering an alert to the legitimate owner of the number, then leverage AI to organize the thousands of identities by credit score (less than 600, 600-700, 700-800, etc.). Identities with scores below 700 would be matched to activities that bolster credit scores, such as making charitable donations and applying for and paying off sub-prime or same-day loans. This essentially amounts to “pig butchering” a credit score until it tops 700. (More on this later.)

Step two: signs of life

Next up: It’s time to give this fake human a “pulse.”

The first priority is to add an email address, ideally an aged and geo-located email address. Penny pinchers can create a free email address, but in either case the fraudster communicates via this email to build credibility. The email would also need to be matched with an identity in the same geography. It helps if the stolen identity boasts a high enough credit score to convince banks they’re onboarding an attractive new customer, but some fraud opportunities (such as subprime lending) don’t require a top-notch credit score.

Next is to nab a new phone number, which comes in handy for authentication purposes. Apply the phone number to a cheap Boost Mobile phone or the like, and that’s enough to bypass 2FA (two-factor authentication) and OTPs (one-time passcodes).

A fraudster using multiple smartphones to manage synthetic identities

Once the new synthetic account is live, it must avoid suspicion by interacting and existing online like a real human would. Filling out the rest of their profile details. Chatting with online support agents on e-commerce websites. Clicking the ads on a bank’s website that offer opportunities to apply for credit cards and loans.

Fake identities can further legitimize themselves by building out social media profiles on platforms like X or LinkedIn. Who’s to say they don’t actually work for IBM or some other Fortune 100 stalwart, or didn’t graduate from Harvard or some other Ivy League school? Do customer onboarding teams have time to poke holes? Probably not.

Step three: building credit

Once a synthetic identity has established itself as a real human, all that’s left is to build credit before cashing out and moving on to another unsuspecting bank.

This is the trademark of the latest iteration of synthetics, known as SuperSynthetic™ identities. Rather than putting on ski masks and bull-rushing banks, SuperSynthetics prefer to take their sweet time.

Over the course of several months, a SuperSynthetic bot leverages its AI-generated identity to digitally deposit small amounts of money. In the meantime, it interacts with website and/or mobile app functions so as to not raise suspicion. SuperSynthetics might also build credit history by paying off cheap payday loans, and donating to charities that tie activity to its stolen SSN. While these modest deposits accumulate, the SuperSynthetic identity continues to consistently access its bank account (checking its balance, looking at statements, etc.).

The next generation of bots: SuperSynthetic identities

Eventually, the reputation of a “customer in good standing” is achieved. The identity’s creditworthiness score increases. A credit card or loan is extended. The fraudster starts warming up the getaway car.

Months of patience (12-18 months, on average) finally pays off when the bank deposits the loan or issues the credit card and the synthetic identity cashes out. It’s a systematic, slow-burn operation and it’s executed at scale. SuperSynthetic “sleeper” identities are actively preying on banks and finservs—by the thousands.

What now?

AI-powered synthetic and SuperSynthetic identities are wicked-smart, as are the criminal enterprises deploying them en masse. These aren’t black-hoodied EECS dropouts operating out of mom’s basement; the humans behind fake humans are well-funded and know who to target, namely smaller financial organizations like credit unions that lack the data and extensive fraud stacks and teams of a Bank of America or Chase.

Individuals aren’t safe either. Today’s fraudsters are leveraging social engineering and conversational AI tools such as ChatGPT to swindle regular Joes and Janes. Take “pig butchering” scams, for example. These drawn-out schemes start as a wrong number text message before, weeks or months later, recipients are tricked into making bogus crypto investments.

And if synthetic or SuperSynthetic identities require an extra layer of trickery, they can always count on deepfakes. Generative AI has elevated deepfakes to hyperrealistic proportions. To wit: A finance worker in China wired $25M following a video call with a deepfaked CFO.

Creating a synthetic identity is easy. Stopping one is tough. But the latter isn’t a lost cause.

A hefty amount of real-time, multicontextual, activity-backed identity intelligence is just what the synthetic fraud doctor ordered. That’s part of the solution, at least. Banks also need to switch up the philosophical approach underpinning their security stacks. The optimal approach is a “top-down” strategy that analyzes synthetic identities collectively rather than individually.

Doing this preemptively—prior to account creation—detects signature online behaviors and patterns of synthetic identities that otherwise would get lost in the sauce. Coincidence is ruled out. Synthetics are singled out.

If banks and finservs have any chance of neutralizing the newest evolution of synthetic fraudsters, this is the ticket. But the clock is ticking. SuperSynthetic identities grow in strength and number by the day. Businesses may not feel the damage for weeks or even months, but that long dynamite fuse culminates in a big, and possibly irreversible, boom.

Celebrities, politicians, and banks face a deepfake dilemma

We’re reaching the “so easy, a caveman can do it” stage of the deepfake epidemic. Fraudsters don’t need a computer science degree to create and deploy armies of fake humans, nor will it drain their checking account (quite the opposite).

As if deepfake technology wasn’t accessible enough, the recent unveiling of OpenAI’s Sora product only simplifies—and complicates—matters. Sora, which for now is only available to certain users, produces photorealistic video scenes from text prompts. Not to be outdone, Alibaba demonstrated their EMO product making the Sora character sing. The lifelike videos created by such deepfake platforms fool even the ritziest of liveness detection solutions.

AI-powered fraud isn’t flying under the radar anymore—the prospect of taxpayers losing upwards of one trillion dollars will do that. One burgeoning scam, known as pig butchering, was featured on an episode of John Oliver’s Last Week Tonight. These scams start as a wrong number text message and, over the course of weeks or months, lure recipients into bogus crypto investments. Conversational generative AI tools like ChatGPT, combined with clever social engineering, make pig butchering a persuasive and scalable threat. Accompanying these texts with realistic deepfaked images only bolsters the perceived authenticity.

Companies are taking notice, too. So is the Biden administration, though its executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” in late 2023 didn’t sufficiently address synthetic fraud—specifically cases involving Generative AI and deepfakes.

The damage caused by AI-generated, deepfaked identities continues to worsen. Here is how it has permeated seemingly every facet of our lives, and how banks can stay one step ahead.

Hacking the vote

The 2024 presidential election is shaping up to be quite the spectacle, one that will capture the eyes of the world and, in all likelihood, further divide an already fractured populace. Citizens exercising their right to vote is crucial, but the advancement of deepfake technology raises another concern: Are voters properly informed?

Election-hacking-as-a-service sounds like the work of dystopian fiction, but it’s just the latest threat politicians and their constituents need to worry about. Highly sophisticated factions—in the US and abroad—are leveraging generative AI and deepfakes to weaponize disinformation and flip elections like flapjacks.

Some election meddlers have changed the outcome of 30+ elections. Remember the deepfaked Biden robocall ahead of the New Hampshire primary? That’s the handiwork of an election hacking superteam. A personalized text message or email might not be from [insert candidate here]. A video portraying an indecent remark could be fabricated. Some voters may say they’re “leaning” towards voting yay or nay on Measure Y or Prop Z, when in actuality they’re being pushed in either direction by synthetic election swingers.

In February, a slew of tech behemoths signed an accord to fight back against AI-generated election hacking. Like Biden’s executive order, the accord is a step in the right direction; time will tell if it pays dividends.

The case of the deepfaked CFO

Deepfaked audio and video are convincing enough to sway voters. They can also dupe multinational financial firms out of $25 million—overnight.

Just ask the Hong Kong finance worker who unknowingly wired about $25.6 million to fraudsters after attending a video conference call with people he thought were his colleagues. A synthetic identity posing as the company’s CFO authorized the transactions—15 total deposits into five accounts—which the worker discovered were fraudulent after checking in with his corporate office.

It appears the bad actors used footage of past video conferences to create the deepfaked identities. Data from WhatsApp and emails helped make the identities look more legitimate, which shows the lengths these deepfaking fraudsters are willing to go.

A couple of years ago, fraudsters would have perpetrated this attack in a simpler fashion, via phishing, for example. But with the promise of bigger paydays, and much less effort and technical knowhow required thanks to the ongoing AI explosion, cyber thieves have every incentive to deepfake companies all the way to the bank.

The Taylor Swift incident

Celebrities, too, are getting a taste of just how destructive deepfakes can be.

Perhaps the most notable (and widely covered) celebrity deepfake incident happened in January when sexually explicit, AI-generated pictures of Taylor Swift popped up on social media. Admins on X/Twitter, where the deepfaked images spread like wildfire, eventually blocked searches for the images but not before they garnered nearly 50 million views.

Pornographic celebrity deepfakes aren’t a new phenomenon. As early as 2017, Reddit users were superimposing the faces of popular actresses—such as Scarlett Johansson and Gal Gadot—onto porn performers. But AI technology back then was nowhere near where it is today. Discerning users could spot a poorly rendered face-swap and determine a video or image was fake.

Shortly after the Taylor Swift fiasco, US senators proposed a bill that enables victims of AI-generated deepfakes to sue the videos’ creators—long overdue considering a 2019 report found that non-consensual porn comprised 96 percent of all deepfake videos.

Deepfaking the finservs

Whether it’s hacking elections, spreading pornographic celebrity deepfakes, or posing as a company’s CFO, deepfakes have never been more convincing or dangerous. And, because fraudsters want the most bang for their buck, naturally they’re inclined to attack those with the most bucks: banks, fintech companies, and other financial institutions.

The $25 million CFO deepfake speaks to just how severe these cases can be for finservs, though most deepfaking fraudsters prefer a measured approach that spans weeks or months. Such is the M.O. of SuperSynthetic™ “sleeper” identities. This newest species of synthetic fraudster is too crafty to settle for a brute-force offensive. Instead, it leverages an aged and geo-located identity that’s intelligent enough to make occasional deposits and interact with a banking website or app for an extended period to appear like a genuine customer.

However, SuperSynthetics achieving their long-awaited goal—accepting a credit card or loan offer, cashing out, and scramming—is contingent on one vital step: passing the onboarding process.

This is where deepfakes come in. During onboarding, SuperSynthetics can deepfake driver’s licenses and other forms of ID, even live video interviews if need be. Given the advancement in deepfake technology, and the unreliability of liveness detection, the only real chance banks have is to stop SuperSynthetic identities before they’re onboarded.

Using a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence, preemptively sniffing out SuperSynthetics is indeed possible. This is the foundation of a “top-down” approach that analyzes synthetic identities collectively—different from the one-by-one approach of the olden days. A bird’s eye view of identities uncovers signature online behaviors and patterns consistent enough to rule out a false positive. Multiple identities depositing money into their checking account every Wednesday at 9:27 p.m.? Something’s afoot.
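The Wednesday-at-9:27 example lends itself to a simple sketch of the top-down idea: given a stream of deposit events, group accounts by an identical weekday-and-minute slot and flag any slot shared by several accounts. The data shape and cohort threshold below are assumptions for illustration:

```python
# Minimal sketch of cohort detection: accounts whose deposits recur at
# the exact same weekday, hour, and minute are grouped together.
# Threshold and event format are illustrative assumptions.
from collections import defaultdict
from datetime import datetime

def suspicious_cohorts(deposits, min_cohort=3):
    """deposits: iterable of (account_id, datetime) events. Returns
    sorted lists of accounts sharing an identical deposit time slot."""
    slots = defaultdict(set)
    for account_id, ts in deposits:
        slots[(ts.weekday(), ts.hour, ts.minute)].add(account_id)
    return [sorted(accounts) for accounts in slots.values()
            if len(accounts) >= min_cohort]
```

A single account depositing on Wednesday evenings is unremarkable; dozens of "unrelated" accounts hitting the same minute of the same weekday is the kind of signature an individualistic, one-account-at-a-time tool never sees.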

The top-down approach is the surest and fastest way banks can ferret out synthetic identities and avoid getting deepfaked at the onboarding stage. But the clock is ticking. A study, commissioned by Deduce, found more than 75% of finservs already had synthetic identities in their databases, and 87% had extended credit to fake accounts.

Bank vs. Deepfake clearly isn’t a fair fight. But if banks do their work early, and subsequently avoid deepfakes altogether, their customers, reputations, and bottom lines will be the better for it.

Get ahead, or get left behind

New technology gets the people going. Just ask the folks coughing up a fair sum of cash for an Apple Vision Pro. Sure, these users may look like Splinter Cell operatives with their VR goggles on but, most likely, Apple’s foray into “spatial computing” will take off sooner rather than later.

However, before everyday users and even large enterprises can adopt new technologies, another category of users is way ahead of them: fraudsters. These proactive miscreants adopt the latest tech and find new ways to victimize companies and their customers. Think metaverse and crypto fraud or, most recently, the use of generative AI to create legions of humanlike bots.

Look back through the decades and a clear pattern emerges: new tech = new threat. Phishing, for example, was the offspring of instant messaging and email in the mid-1990s. Even the “advance fee” or “Nigerian Prince” scam we associate with our spam folders originally cropped up in the 1920s due to breakthroughs in physical mail.

What can we learn from studying this troubling pattern? How can businesses adopt the latest wave of nascent technologies while protecting themselves from opportunistic fraudsters? In answering these questions, it’s helpful to examine the major technological advancements of the past 20+ years—and how bad actors capitalized at every step along the way.

The 2000s

The 2000s ushered in digital identities and, by extension, digital identity fraud.

Web 1.0 and the internet had exploded by the early aughts. PCs, e-commerce, and online banking increased the personal data available on the web. As more banks transitioned to online, and digital-only banks emerged, fintech companies like PayPal hit the ground running and online transactions skyrocketed. Fraudsters pounced on the opportunity. Phishing, Trojan horse viruses, credential stuffing, and exploiting weak passwords were among the many tricks that fooled users and led to breaches at notable companies and financial institutions.

An example of a Nigerian Prince or “419” email scam

Phishing scams, in which bogus yet legitimate-looking emails persuade users to click a link and input personal info, took off in the 2000s and are even more effective today. Thanks to AI, including AI-based tools like ChatGPT, phishing emails are remarkably sophisticated, targeted, and scalable.

Social media entered the frame in the 2000s, too, which opened a Pandora’s box of online fraud schemes that still persist today. The use of fake profiles provided another avenue for phishing and social engineering that would only widen with the advent of smartphones.

The 2010s

The 2010s were all about the cloud. Companies went gaga over low-cost computing and storage solutions, only to go bonkers (or broke) due to the corresponding rise in bot threats.

By the start of the decade, Google, Microsoft, and AWS were all-in on the cloud. The latter brought serverless computing to the forefront at the 2014 re:Invent conference, and the two other big-tech powerhouses followed suit. Then came the container renaissance: the release of Docker and Kubernetes, the mass adoption of DevOps, hybrid and multicloud, and so on. But, in addition to their improved portability and faster deployment, containers afforded bad actors (and their bots) another attack surface.

AWS unveils Lambda (and serverless computing) at re:Invent 2014

The rise of containers, cloud-native services, and other cloudy tech in the 2010s led to a boom in innovation, efficiency, and affordability for enterprises—and for fraudsters. Notably, the Mirai botnet tormented global cloud services companies using unprecedented DDoS (distributed denial of service) attacks, and the 3ve botnet accrued $30 million in click-fraud over a five-year span.

Malicious bots had never been cheaper or more scalable, nor brute-force and credential-stuffing attacks more seamless and profitable. The next tech breakthrough would catapult bots to another level of deception.

The 2020s

AI has blossomed in the 2020s, especially over the past year, and once again fraudsters have flipped the latest technological craze into a cash cow.

Amid the ongoing AI explosion, bad actors have specifically leveraged Generative AI and self-learning identity personalization to line their pockets. It’s hard to say what’s scarier—how human these bots appear, or how easy it is for novice users to create them. The widespread availability of data and AI’s capacity to teach itself using LLMs (large language models) has spawned humanlike identities at massive scale. Less technical fraudsters can easily build and deploy these identities thanks to tools like WormGPT, otherwise known as “ChatGPT’s malicious cousin.”

SuperSynthetic identities represent the next step in bot evolution

The most nefarious offshoot of AI’s golden age may be SuperSynthetic™ identities. The most humanlike of the synthetic fraud family tree, SuperSynthetics are all about the long con and don’t mind waiting several months to cash out. These identities, which can deepfake their way past account verification if need be, are realistically aged and geo-located with a legit credit history to boot, and they’ll patiently perform the online banking actions of a typical human to build trust and credit worthiness. Once that loan is offered, the SuperSynthetic lands its long-awaited reward. Then it’s on to the next bank.

Like Web 1.0 and cloud computing before it, AI’s superpowers have amplified the capabilities of both companies and the fraudsters who threaten their users, bottom lines and, in some cases, their very existence. This time around, however, the threat is smarter, more lifelike, and much harder to stop.

What now?

There’s undoubtedly a positive correlation between the emergence of technological trends and the growth of digital identity fraud. If a new technology hits the scene, fraudsters will exploit it before companies know what hit them.

Rather than getting ahead of the latest threats, many businesses are employing outdated mitigation strategies that woefully overlook the SuperSynthetic and stolen identities harming their pocketbooks, users, and reputations. Traditional fraud prevention tools scrutinize identities individually, prioritizing static data such as device, email, IP address, SSN, and other PII data. The real solution is to analyze identities collectively, and track dynamic activity data over time. This top-down strategy, with a sizable source of real-time, multicontextual identity intelligence behind it, is the best defense against digital identity fraud’s most recent evolutionary phase.

It’s not that preexisting tools in security stacks aren’t needed; it’s that these tools need help. At last count, the Deduce Identity Graph is tracking nearly 28 million synthetic identities in the US alone, including nearly 830K SuperSynthetic identities (a 10% increase from Q3 2023). If incumbent antifraud systems aren’t fortified, and companies continue to look at identities on a one-to-one basis, AI-generated bots will keep slipping through the cracks.

New threats require new thinking. Twenty years ago phishing scams topped the fraudulent food chain. In 2024 AI-generated bots rule the roost. The ultimatum for businesses remains the same: get ahead, or get left behind.

Synthetic customers are there, even if you don’t see them

There’s no denying that customer data platforms (CDPs) are a must-have tool for today’s companies. Consolidating customer data into one location is much more manageable. Aside from data privacy considerations—particularly in finance and healthcare—a CDP’s organized, streamlined view of customer data activates personalized user experiences and offers for existing customers while accurately identifying prospective customers who are most likely to drive revenue.

But synthetic fraud, which now accounts for 85% of all identity fraud, is infesting the tidiest and most closely monitored of CDPs. Most CDPs scan for telltale signs of fraud in real-time; however, synthetic fraudsters are too smart for that. The ubiquity of AI, and its ever-growing intelligence, enables bad actors to create and manipulate synthetic identities that appear more human than ever. The signs of fraud aren’t so obvious anymore, and the cybersecurity tools used by many companies aren’t up to snuff.

Effectively stomping out synthetic identity fraud requires an obsessive degree of CDP hygiene. This, of course, isn’t possible without a thorough understanding of what synthetic identities are capable of, how they operate, and the strategy companies must adopt to neutralize them.

Silent killers

No intelligence agency wants to readily admit it’s been infiltrated by a spy, and no CEO is exactly chomping at the bit to admit their company’s customer database is crawling with fake customers. When PayPal’s then-CEO, Dan Schulman, admitted to over 4 million fake customers, it cost the fintech company over 25% of its market capitalization. But these fraudsters are indeed there, camped out in CDPs and operating like legitimate customers—deposits, withdrawals, credit services, the whole nine.

A recent Wakefield report surveyed 500 senior fraud and risk professionals from the US. More than 75% of these executives said they had synthetic customers. Half of respondents deemed their company’s synthetic fraud prevention efforts somewhat effective, at best.

Perhaps most troubling? 87% of these companies admitted to extending credit to synthetic customers, and 53% of the time credit was extended proactively, via a marketing campaign, to the fraudster. These fraudsters aren’t just incredibly human-like and patient—they’re in it for the big haul. And according to the FTC’s 2022 report on identity fraud, the per-incident financial impact is in excess of $15K.

Synthetic Sleeper identities, as we call them, can remain in CDPs for months, in some cases over a year. They deposit small amounts of money here and there while interacting with the website or mobile app like a real customer would. Once their creditworthiness gets a bump, and they qualify for a loan or line of credit, payday is imminent. The fraudster performs a “bust-out,” or “hit-and-run.” The money is spent, and the bank is left with uncollectible debt.

This is not your grandmother’s synthetic identity. Such intelligence and cunning is the handiwork of synthetic fraud’s latest iteration: the SuperSynthetic™ identity.

SuperSynthetic, super slippery

How are synthetic fraudsters turning CDPs into their own personal clubhouses? Look no further than SuperSynthetic identities. The malevolent offspring of the ongoing generative AI explosion, SuperSynthetics are growing exponentially. Deduce’s most recent Index tracks 828,095 SuperSynthetic identities in its identity graph. These are hitting companies, especially banks, with costly smash-and-grabs at an unprecedented rate.

SuperSynthetics aren’t high on style points, but why opt for a brute force approach if you don’t need to? These methodical fraudsters are more than content playing the long game. Covering all of their bases allows for such patience—their credit history is legit; their identity is realistically aged and geo-located; and, for good measure, they can deepfake their way past selfie, video, or document verification.

Even the sharpest of real-time fraud detection solutions are unlikely to catch a SuperSynthetic. The usual hallmarks—an IP address or credit card being used for multiple accounts, behavioral changes over time—aren’t present. A SuperSynthetic is far too pedestrian to raise eyebrows, depositing meager dollar amounts over several months, regularly checking its account balance, paying bills and otherwise transacting innocuously until, finally, its reputation earns a credit card or loan offer.

Once the loan is transferred, or the credit card is acquired, it’s sayonara. The identity cashes out and moves on to the next bank. After all, the fraudster does not care about their credit score for that identity, one of dozens or hundreds they are manipulating. It has done its job and will be sacrificed for a highly profitable return.

Fake identities, real problems

Deduce estimates that 3-5% of financial services and fintech new accounts onboarded within the past two years are SuperSynthetic identities. Failing to detect these sleeper identities in a CDP hurts companies in a multitude of ways, all of which tie back to the bottom line.

Per the Wakefield report, 20% of senior US fraud and risk execs say synthetic fraud incidents rack up between $50K-$100K per incident. 23% put the number at $100K+. That the low end of this range sits at a whopping $50K should be alarming enough for companies to invest in preemptive countermeasures against CDP breaches.

Another downside of synthetic infiltration is algorithm poisoning. Since the data for synthetic “customers” is inherently fake, this skews the models that drive credit decisioning. Risky applicants can be mistakenly offered loans, or vice versa. For banks, financial losses from algorithm poisoning are two-fold: erroneously extending credit to fake or unworthy customers; and bungling opportunities to extend credit to the right customers.
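The mechanics of algorithm poisoning can be sketched in a few lines. The following is a minimal, hypothetical illustration (invented numbers, not real portfolio data): synthetic “customers” with fabricated spotless repayment histories drag down the estimated default rate a lender’s models learn from, nudging credit decisions looser than the real risk warrants.

```python
# Illustrative sketch with hypothetical data: synthetic accounts carrying
# fake, perfect repayment histories skew a simple risk estimate.

def default_rate(records):
    """Fraction of accounts in the portfolio that defaulted."""
    return sum(r["defaulted"] for r in records) / len(records)

# Genuine portfolio: 1,000 accounts, 8% of which historically default.
genuine = [{"defaulted": i < 80} for i in range(1000)]

# 50 synthetic accounts, all with spotless (fabricated) repayment records.
synthetic = [{"defaulted": False} for _ in range(50)]

clean_rate = default_rate(genuine)                 # 0.08
poisoned_rate = default_rate(genuine + synthetic)  # ~0.076, understated risk

# The poisoned estimate makes the book look safer than it is --
# right up until the synthetics bust out and the losses land.
print(f"clean: {clean_rate:.3f}, poisoned: {poisoned_rate:.3f}")
```

Real credit-decisioning models are far more complex, but the direction of the distortion is the same: fake good behavior in the training data systematically understates risk.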

A signature approach

The good news for financial services organizations (and their CDPs) is the battle against synthetic, and even SuperSynthetic, identities is not a futile one. The same strategy that’s effective in singling out synthetic identities pre-NAO (New Account Opening) can help spot synthetics that have already breached CDPs.

Even if a SuperSynthetic has already bypassed fraud detection at the account opening stage, gathering identity activity from before, during, and after the NAO workflow and analyzing identities collectively, rather than one-by-one, unearths SuperSynthetic behavioral patterns.

Traditional fraud prevention tools take an individualistic approach, doubling down on static data—device, email, IP address—for singular identities. But catching synthetic fraudsters, pre- or post-NAO, calls for tracking dynamic activity data over time. At a high level (literally), this translates to a top-down, or “birdseye,” strategy—powered by an enormous and scalable source of real-time, multicontextual identity intelligence—that verifies identities as a group or signature. Any other plan of attack is unlikely to pick up the synthetic scent.

Per the slide above, a unique activity-backed data set augments the data from a CDP and fraud platform to ferret out synthetic accounts. To catch these slithery fraudsters more data can and should be deployed. Knowing how an identity behaved online prior to becoming a customer bolsters the data science models used to give CDPs a synthetic spring cleaning.

What does this look like in practice? Say a real-time scan of in-app customer activity reveals, over an extended period, that multiple identities check their account balance every Thursday at exactly 8:17 a.m. Patterns such as this rule out coincidence and uncover the otherwise clandestine footprints of SuperSynthetic identities.
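A detection pass like the one described above could be sketched as follows. This is a minimal illustration, not a production detector: the event log, account IDs, and thresholds are all hypothetical, and a real system would work over streaming CDP data at scale.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical activity log: (identity_id, ISO timestamp of a balance check).
events = [
    ("acct_01", "2024-01-04T08:17:00"), ("acct_01", "2024-01-11T08:17:00"),
    ("acct_02", "2024-01-04T08:17:00"), ("acct_02", "2024-01-11T08:17:00"),
    ("acct_03", "2024-01-04T08:17:00"), ("acct_03", "2024-01-11T08:17:00"),
    ("acct_99", "2024-01-05T14:02:00"),  # an ordinary, unsynchronized user
]

def flag_synchronized(events, min_identities=3, min_weeks=2):
    """Bucket events into (weekday, HH:MM) slots; flag slots where several
    distinct identities recur across multiple weeks."""
    slots = defaultdict(lambda: defaultdict(set))  # slot -> identity -> weeks
    for ident, ts in events:
        dt = datetime.fromisoformat(ts)
        slot = (dt.weekday(), dt.strftime("%H:%M"))
        slots[slot][ident].add(dt.isocalendar().week)
    flagged = {}
    for slot, idents in slots.items():
        recurring = {i for i, wks in idents.items() if len(wks) >= min_weeks}
        if len(recurring) >= min_identities:
            flagged[slot] = recurring
    return flagged

# Thursday (weekday 3) at 08:17 surfaces acct_01..03 as one suspicious cohort.
print(flag_synchronized(events))
```

No single account looks suspicious in isolation; the signal only appears when identities are analyzed collectively.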

The intelligence and elusiveness of SuperSynthetics are increasing at a breakneck pace. In addition to terrorizing CDPs, SuperSynthetics have the potential to peddle sports betting accounts, carry out financial aid scams, and even swing the stock market via disinformation campaigns. Given what’s at stake, not combating SuperSynthetics with a thorough activity-driven approach, for some companies, might spell serious trouble in the year ahead.

College students are lifelong learners. So are AI-powered fraudsters.

With each passing day AI grows more powerful and more accessible. This gives fraudsters the upper hand, at least for now, as they roll out legions of AI-powered fake humans that even governmental countermeasures—such as the Biden administration’s recent executive order—will be lucky to slow down.

Among other nefarious activities, bad actors are leveraging AI to peddle synthetic bank and online sports betting accounts, swing elections, and spread disinformation. They’re also fooling banks with another clever gimmick: posing as college freshmen.

College students, particularly underclassmen, have long been a target demographic for banks. Fraudsters are well aware and know that banks’ yearning for customer acquisition, coupled with their inadequate fraud prevention tools, present an easy cash-grab opportunity (and, perhaps, a chance to revisit their collegiate years).

Early bank gets the bullion

The appeal of a new college student from a customer acquisition perspective can’t be overstated.

A young, impressionable kid is striking out on their own for the first time. They need a credit card to pay for both necessary and unnecessary things (mostly the latter). They need a bank. And their relationship with that bank? There’s a good chance it will outlast most of their romantic relationships.

This could be their bank through college, through their working years, the bank they procure a loan from for their first house, the bank they encourage their kids and grandkids to bank with. In a college freshman banks don’t just land one client, but potentially an entire generation of clients. Lifetime value up the wazoo.

Go to any college move-in day and you’ll spot bank employees at tables, using giveaway gimmicks to attract students to open up new accounts. According to the Consumer Financial Protection Bureau, 40% of students attend a college that’s contractually linked to a specific bank. However, as banks shovel out millions so they can market their products at universities, a fleet of synthetic college freshmen lie in wait, with the potential to collectively steal millions of their own.

Playing the part

Today’s fraudsters are master identity-stealers who can dress up synthetic identities to match any persona.

In the case of a fake college freshman, building the profile starts off in familiar fashion: snagging a dormant Social Security number (SSN) that’s never been used or hasn’t been used in a while. Like many forms of Personally Identifiable Information (PII), stolen SSNs from infants or deceased individuals are readily available on the dark web.

From here, fraudsters can string together a combination of stolen and made-up PII to create a synthetic college freshman identity that qualifies for a student credit card. No branch visit necessary, and IDs can be deepfaked. The synthetic identity makes small purchases and pays them off on time—food, textbooks, phone bill—building trust with the bank and improving their already respectable credit score of around 700. They might sign up for an alumni organization and/or apply for a Pell Grant to further solidify their collegiate status.

Pell Grants, of course, require admission to a college—a process that, similar to acquiring a credit card from a bank, is easy pickings for synthetic fraudsters.

The ghost student epidemic

Any bank that doesn’t take the synthetic college freshman use case seriously should study the so-called “ghost student” phenomenon: fake college enrollees that rob universities of millions. 

In California, these synthetic students, who employ the same playbook as bank-swindling synthetics, comprise 20% of community college applications alone (more than 460K). Thanks to an increased adoption of online enrollment and learning post-pandemic, relaxed verification protocols for household income, and the proliferation of AI-powered fake identities, ghost students can easily grab federal aid and never have to attend class.

Like ghost students, synthetic college freshmen can apply for a credit card without ever setting foot inside a bank branch. Online identity verification is a breeze for the seasoned bad actor. Given the democratization of powerful generative AI tools, ID cards and even live video interviews over Zoom or another video client can be deepfaked.

A (SuperSynthetic) tale old as time

Both the fake freshmen and ghost student problems are symptomatic of a larger issue: SuperSynthetic™ identities.

SuperSynthetic bots are the most sophisticated yet. Forget the brute force attacks of yore; SuperSynthetics are incredibly lifelike and patient. These identities play nice for several months or even years, building trust by paying off credit card transactions on time and otherwise interacting like a real human would. But, once the bank offers a loan and a big payday is in sight, that SuperSynthetic is out the door.

An unorthodox threat like a SuperSynthetic identity can’t be thwarted by traditional fraud prevention tools. Solutions reliant on individualistic, static data won’t cut it. Instead, banks (and universities, in the case of ghost students) need a solution powered by scalable and dynamic real-time data. The latter approach verifies identities as a group or signature: the only way to pick up on the digital footprints left behind by SuperSynthetics.

As human as SuperSynthetic identities are, they aren’t infallible. With a “birds eye” view of identities, patterns of activity—such as SuperSynthetics commenting on the same website at the exact same time every week over an extended period—quickly emerge.

Fake college students are one of the many SuperSynthetic personas capable of tormenting banks. But it isn’t the uphill battle it appears to be. If banks change their fraud prevention philosophy and adopt a dynamic, birds eye approach, they can school SuperSynthetics in their own right.

Synthetic fraud remains the elephant in the room

The Biden administration’s recent executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” naturally caused quite a stir among the AI talking heads. The security community also joined the dialog and expressed varying degrees of confidence in the executive order’s ability to protect the federal government and private sector against bad actors.

Clearly, any significant effort to enforce responsible and ethical AI use is a step in the right direction, but this executive order isn’t without its shortcomings. Most notable is its inadequate plan of attack against synthetic fraudsters—specifically those created by Generative AI.

With online fraud reaching a record $3.56 billion through the first half of 2022 alone, financial institutions are an obvious target of AI-based synthetic identities. A Wakefield report commissioned by Deduce found that 76% of US banks have synthetic accounts in their database, and a whopping 87% have extended credit to synthetic “customers.”

However, the shortsightedness of the executive order also carries with it a number of social and political ramifications that stretch far beyond dollars and cents.

Missing the (water)mark

A key element of Biden’s executive order is the implementation of a watermarking system to differentiate between content created by humans and AI, a topical development in the wake of the SAG-AFTRA strike and the broader artist-versus-AI clash. Establishing an object’s provenance via a digital image or signature would seem a sensible enough way to identify AI-generated content and synthetic fraud—if only the watermarking mechanisms currently at our disposal weren’t utterly unreliable.

A University of Maryland professor, Soheil Feizi, along with researchers at Carnegie Mellon and UC Santa Barbara, circumvented watermark verification by spoofing watermarks onto fake imagery—and removed legitimate watermarks just as easily.

It’s also worth noting that the watermarking methods laid out in the executive order were developed by big tech. This raises concerns around a walled-garden effect in which these companies are essentially regulating themselves while smaller companies follow their own set of rules. And don’t forget about the fraudsters and hackers who, of course, will gladly continue using unregulated tools to commit AI-powered synthetic fraud, as well as overseas bad actors who are outside US jurisdiction and thus harder to prosecute.

The deepfake dilemma

Another element of many synthetic fraud attacks, deepfake technology, is addressed in the executive order but a clear-cut solution isn’t proposed. Deepfaking is as complex and democratized as ever—and will only grow more so in the coming years—yet the executive order falls short of recommending a plan to continually evolve and keep pace.

Facial recognition verification is employed at the government and state level, but even novice bad actors can use AI to deepfake their way past these tools. Today, anyone can deepfake an image or video with a few taps. Apps like FakeApp can seamlessly integrate someone’s face into an existing video, or generate an entirely new one. As little as a cropped face from a social media image can spawn a speaking, blinking, head-moving entity. Uploaded selfies and live video calls pass with flying colors.

In this era of remote customer onboarding, coinciding with unprecedented access to deepfake tools, it behooves executive orders and other legislation to offer a more concrete solution to deepfakes. Finservs (financial services) companies are in the crosshairs, but so are social media platforms and their users; the latter poses its own litany of dangers.

Synthetic fraud: multitudes of mayhem

The executive order’s watermarking notion and insufficient response to deepfakes don’t squelch the multibillion-dollar synthetic fraud problem.

Synthetic fraudsters still have the upper hand. With Generative AI at their disposal, they can create patient and incredibly lifelike SuperSynthetic™ identities that are extremely difficult to intercept. Worse, “fraud-as-a-service” organizations peddle synthetic mule accounts from major banks, and also sell synthetic accounts on popular sports betting sites—new, aged, geo-located—for as little as $260.

More worrisome, amid the rampant spread of disinformation online, is the potential for synthetic accounts to cause social panic and political upheaval.

Many users struggle to identify AI-generated content on X (formerly Twitter), much less any other platform, and social networks charging a nominal fee to “verify” an account offer synthetic identities a cheap way to appear even more authentic. All it takes is one post shared hundreds of thousands or millions of times for users to mobilize against a person, nation, or ideology. A single doctored image or video could spook investors, incite a riot, or swing an election.

“Election-hacking-as-a-service” is indeed another frightening offshoot of synthetic fraud, to the chagrin of politicians (at least those on the wrong side of it). These fraudsters weaponize their armies of AI-generated social media profiles to sway voters. One outfit in the Middle East interfered in more than 33 elections.

Banks or betting sites, social uprisings or rigged elections, unchecked synthetic fraud, buttressed by AI, will continue to wreak havoc in multitudinous ways if it isn’t combated by an equally intelligent and scalable approach.

The best defense is a good offense

The executive order, albeit an encouraging sign of progress, is too vague in its plan for stopping AI-generated content, deepfakes, and the larger synthetic fraud problem. The programs and tools it says will find and fix security vulnerabilities aren’t clearly identified. What do these look like? How are they better than what’s currently available?

AI-powered threats grow smarter by the second. Verbiage like “advanced cybersecurity program” doesn’t say much; will these fraud prevention tools be continually developed so they’re in lockstep with evolving AI threats? To its credit, the executive order does mention worldwide collaboration in the form of “multilateral and multi-stakeholder engagements,” an important call-out given the global nature of synthetic fraud.

Aside from an international team effort, the overarching and perhaps most vital key to stopping synthetic fraud is an aggressive, proactive philosophy. Stopping AI-generated synthetic and SuperSynthetic identities requires a preemptive, not reactionary, approach. We shouldn’t wait for authenticated—or falsely authenticated—content and identities to show up, but rather stop synthetic fraud well before infiltration can occur. And, given the prevalence of synthetic identities, they should have a watermark all their own.

76% of finservs are victims of synthetic fraud

In 1938, Orson Welles’ infamous radio broadcast of The War of the Worlds convinced thousands of Americans to flee their homes for fear of an alien invasion. More than 80 years later, the public is no less gullible, and technology unfathomable to people living in the 1930s allows fake humans to spread false information, bamboozle banks, and otherwise raise hell with little to no effort.

These fake humans, also known as synthetic identities, are ruining society in myriad ways: tampering with election polls and census data, disseminating misleading social media posts with real-world consequences, sharing fake articles on Reddit that subsequently skew the Large Language Models driving platforms such as ChatGPT. And, of course, bad actors can leverage fake identities to steal millions from financial institutions.

The bottom line is this: synthetic fraud is prevalent; financial services companies (finservs), social media platforms, and many other organizations are struggling to keep pace; and the impact, both now and in the future, is frighteningly palpable.

Here is a closer look at how AI-powered synthetic fraud is infiltrating multiple facets of our lives.

Accounts for sale

If you need a new bank account, you’re in luck: obtaining one is as easy as buying a pair of jeans and, in all likelihood, just as cheap.

David Maimon, a criminologist and Georgia State University professor, recently shared a video from Mega Darknet Market, one of the many cybercrime syndicates slinging bank accounts like Girl Scout Cookies. Mega Darknet and similar “fraud-as-a-service” organizations peddle mule accounts from major bank brands (in this case Chase) that were created using synthetic identity fraud, in which scammers combine stolen Personally Identifiable Information (PII) with made-up credentials.

But these cybercrime outfits take it a step further. With Generative AI at their disposal, they can create SuperSynthetic™ identities that are incredibly patient, lifelike, and difficult to catch.

Aside from bank accounts, fraudsters are selling accounts on popular sports betting sites. The verified accounts—complete with name, DOB, address, and SSN—can be new or aged and even geo-located, with a two-year-old account costing as little as $260. Perfect for money launderers looking to wash stolen cash.

Fraudsters are selling stolen bank accounts as well as stolen accounts from sports betting sites.

Cyber gangs like Mega Darknet also offer access to the very Generative AI tools they use to create synthetic accounts. This includes deepfake technology which, besides fintech fraud, can help carry out “sextortion” schemes.

X-cruciatingly false

Anyone who’s followed the misadventures of X (formerly Twitter) over the past year, or used any social media since the late 2010s, knows that Elon’s embattled platform is a breeding ground for bots and misinformation. Generative AI only exacerbates the problem.

A recent study found that X users couldn’t distinguish AI-generated content (GPT-3) from human-generated content. Most alarming is that these same users trusted AI-generated posts more than posts from real humans.

In the US, where 20% of the population famously can’t locate the country on a world map, and elsewhere these synthetic accounts and their large-scale misinformation campaigns pose myriad risks, especially if said accounts are “verified.” It wouldn’t take much to incite a riot, or stoke anger and subsequent violence toward a specific group of people. How about sharing a bogus picture of an exploded Pentagon that impacts the stock market? Yep. That, too.

This fake image of an explosion near the Pentagon exemplifies the danger of synthetic accounts spreading misinformation.


Few topics are timelier, or rile up users faster, than election interference—another byproduct of the fake human (and fake social media) epidemic. The spreading of false information in service of a particular political candidate or party existed well before social media, but the stakes have since increased exponentially.

If fraud-as-a-service isn’t ominous-sounding enough, election-hacking-as-a-service might do the trick. Groups with access to armies of fake social media profiles are weaponizing disinformation to sway elections any which way. Team Jorge is just one example of these election meddling units. Brought to light via a recent Guardian investigation, Team Jorge’s mastermind Tal Hanan claimed he manipulated upwards of 33 elections.

The rapid creation and dissemination of fake social media profiles and content is far more harmful and widespread with Generative AI in the fold. Flipping elections is one of the worst possible outcomes, but grimmer consequences will arise if automated disinformation isn’t thwarted by an equally intelligent and scalable solution.

Finservs in the crosshairs

Cash is king. Synthetic fraudsters want the biggest haul, even if it’s a slow-burn operation stretched out over a long period of time. Naturally, that means finservs, who lost nearly $2 billion to bank transfer or payment fraud last year, are number one on their hit list. 

Most finservs today don’t have the tools to effectively combat AI-generated synthetic and SuperSynthetic fraud. First-party synthetic fraud—fraud perpetrated by existing “customers”—is rising thanks to SuperSynthetic “sleeper” identities that can imitate human behavior for months before cashing out and vanishing at the snap of a finger. SuperSynthetics can also use deepfake technology to evade detection, even if banks request a video interview during the identity verification phase.

It’s not like finservs are dilly-dallying. In a study from Wakefield, commissioned by Deduce, 100% of those surveyed had synthetic fraud prevention solutions installed along with sophisticated escalation policies. However, more than 75% of finservs already had synthetic identities in their customer databases, and 87% of those respondents had extended credit to fake accounts.

Fortunately for finservs and others trying to neutralize synthetic fraud, it’s not impossible to outsmart generative AI. With the right foundation in place—specifically a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence—and a change in philosophy, even a foe that grows smarter and more humanlike by the second can be thwarted.

This philosophical change is rooted in a top-down, bird’s-eye approach that differs from traditional, individualistic fraud prevention solutions that examine identities one by one. A macro view, on the other hand, sees identities collectively and groups them into a single signature which uncovers a trail of digital footprints. Behavioral patterns such as social media posts and account actions rule out coincidence. The SuperSynthetic smokescreen evaporates.

Whether it’s bad actors selling betting accounts, social media platforms stomping out disinformation, or finservs protecting their bottom lines, fake humans are more formidable than ever with generative AI and SuperSynthetic fraud at their disposal. Most companies seem to be aware of the stakes, but singling out bogus users and SuperSynthetics requires a retooled approach. Otherwise, revenue, users, and brand reputations will dwindle, and the ways in which fake accounts wreak havoc will multiply.

That rise in first-party synthetic fraud is no fluke. You have a SuperSynthetic identity problem.

Online fraud in the US totaled a record-breaking $3.56 billion through the first half of last year. Most consumer-facing companies have done the sensible thing and spent six or seven figures fortifying their perimeter defenses against third-party fraud.

But another effective, and seemingly counterintuitive, strategy for stopping today’s fraudsters is to think inside-out, not just outside-in. In other words, first-party synthetic fraud—or fraud perpetrated by existing “customers”—is threatening bottom lines in its own right, by way of AI-generated synthetic “sleeper” identities that play nice for months before executing a surprise attack.

Banks and other finserv (financial services) companies shouldn’t be surprised if their first-party synthetic fraud is off the charts. Deduce estimates that 3-5% of new customers acquired in the past year are actually synthetic identities—specifically SuperSynthetic™ identities created using generative AI.

The good news is that a simple change in philosophy will go a long way in neutralizing synthetic first-party fraudsters before they’re offered a loan or credit card.

First-party problems

Third-party fraud is when bad actors pose as someone else. It’s your classic case of identity theft. They leverage stolen credit card info and/or other credentials, or combine real and fake PII (Personally Identifiable Information) to create a synthesized identity, for financial or material gain. Consequently, the victims whose identities were stolen notice fraudulent transactions on their bank statements, or debt collectors track them down, and it’s apparent they’ve been had.

First-party synthetic fraud is even more cunning—and arguably more frustrating—because the account information and activity appear genuine, complicating the fraud detection process. The aftermath is where it hurts the most. Since, unlike third-party fraud, there isn’t an identifiable victim, finservs have no one to collect the debt from and are forced to bite the bullet.

Image Credit: Experian

One hallmark of first-party synthetic fraud is its patience. These sleeper identities appear legitimate for months, sometimes more than a year, making small deposits every now and then while interacting with the website or app like a real customer. Once they bump up their creditworthiness and qualify for a loan or line of credit, it’s game over. The fraudster executes a “bust-out,” or “hit-and-run,” spending the money and leaving the bank with uncollectible debt.

This isn’t the work of your average synthetic identity. Such a degree of calculation and human-like sophistication can only be attributed to SuperSynthetic identities.

That escalated quickly

An Equifax report found that nearly two million consumer credit accounts, over the span of a year, were potentially synthetic identities. More than 30% of these accounts represented a major delinquency risk with cases averaging $8K-10K in losses.

The blame for rising first-party synthetic fraud—and the finservs left in its wake—can be placed squarely on the shoulders of SuperSynthetic identities. These AI-generated bots are proliferating worldwide, scaling their sleeper networks to execute bust-outs on a grand scale.

SuperSynthetics—featuring a three-pronged attack of synthetic identity fraud, legitimate credit history, and deepfake technology—need not brute-force their way into a bank’s pockets. Aside from a SuperSynthetic’s patient approach and aged, geo-located identity, its deepfake capability, a benefit of the recent generative AI explosion, is key to securing the long-awaited loan or credit card.

Selfie verification? A video interview? No problem. Deepfake tools, some of them free, are advanced enough to trick finservs even if they have liveness detection in their stack. Document verification? There’s a deepfake for that, too.

SuperSynthetics don’t have a kryptonite, per se. But analyzing identities from a different angle boosts the chances of a finserv spotting SuperSynthetics before they can circumvent the loan or credit verification stage.

Dusting for fingerprints

If finservs want to sniff out SuperSynthetic identities and successfully combat first-party synthetic fraud, they can’t be afraid of heights.

A top-down, bird’s-eye view is the best way to uncover the digital fingerprints or signatures of SuperSynthetics. Individualistic fraud prevention tools overlook these behavioral patterns, but a macro approach, which studies identities collectively, illuminates forensic evidence like a black light.

A top-down view reveals digital fingerprints that otherwise would go undetected.

Grouping identities into a single signature—and examining them alongside millions of fraudulent identities—reveals indisputable evidence of SuperSynthetic activity: social media posts and account actions performed by a group of identities on the exact same day, at the exact same time, each week. Coincidence is out of the question.
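One way to picture the signature grouping described above is a sketch like the following. It is a toy illustration under stated assumptions—the account names, timestamps, and threshold are all invented—in which each identity’s activity history is reduced to a canonical behavioral signature, and identities sharing a signature collapse into one group.

```python
from collections import defaultdict
from datetime import datetime

def activity_signature(timestamps):
    """Reduce an identity's history to a canonical behavioral signature:
    the sorted set of (weekday, HH:MM) slots in which it acts."""
    slots = set()
    for t in timestamps:
        dt = datetime.fromisoformat(t)
        slots.add((dt.weekday(), dt.strftime("%H:%M")))
    return tuple(sorted(slots))

def group_by_signature(histories, min_group=3):
    """histories maps identity -> list of ISO timestamps. Returns the
    signatures shared by at least `min_group` identities."""
    groups = defaultdict(list)
    for ident, timestamps in histories.items():
        groups[activity_signature(timestamps)].append(ident)
    return {sig: ids for sig, ids in groups.items() if len(ids) >= min_group}

# Hypothetical CDP extract: three bots acting in lockstep, one real customer.
histories = {
    "bot_a": ["2024-03-04T09:30:00", "2024-03-11T09:30:00"],
    "bot_b": ["2024-03-04T09:30:00", "2024-03-11T09:30:00"],
    "bot_c": ["2024-03-04T09:30:00", "2024-03-11T09:30:00"],
    "human": ["2024-03-05T11:47:00", "2024-03-09T20:12:00"],
}
print(group_by_signature(histories))
```

The three bots collapse into one shared signature while the human stands alone—the collective view surfaces what a one-by-one review would miss.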

Of course, not every finserv has the firepower to adopt this strategy. In order to enable a big-picture view, companies’ anti-fraud stacks need a large and scalable source of real-time, multicontextual, activity-backed identity intelligence.

There are other avenues. Consider, for example, the only 100-percent foolproof solution to first-party synthetic fraud: in-person identity verification. Even if this approach were used exclusively at the pre-loan juncture, it seems unlikely that many companies would take on the added friction—though driving down to the bank is a small price to pay for a five- or ten-thousand-dollar loan.

If finservs don’t wish to revisit the good old days of face-to-face verification, the top-down, signature approach is the only other viable deterrent to first-party synthetic fraud. Solutions that analyze identities one by one won’t stop SuperSynthetics before a loan or credit card is granted, and by that point it’s already over.

An old-school approach could be the answer for finservs

For many people, video conferencing apps like Zoom made work, school, and other everyday activities possible amid the global pandemic—and more convenient. Remote workers commuted from sleeping position to upright position. Business meetings resembled “Hollywood Squares.” Business-casual meant a collared shirt up top and pajama pants down low.

Fraudsters were also quite comfortable during this time. Unprecedented numbers of people sheltering in place naturally caused an ungodly surge in online traffic and a corresponding increase in security breaches. Users were easy prey, and so were many of the apps and companies they transacted with.

In the financial services (finserv) sector, branches closed down and ceased face-to-face customer service. Finserv companies relied on Zoom for document verification and manual reviews, and bad actors, armed with stolen credentials and improved deepfake technology, took full advantage.

Even in the face of AI-generated identity fraud, most finservs still use remote identity verification to comply with regulatory KYC requirements and to vet loan applicants. It’s easier than meeting in person, and what customer doesn’t prefer verifying their identity from the comfort of their couch?

But AI-powered synthetic identities are getting smarter and, while deepfake deterrents are closing the gap, a return to an old-school approach remains the only foolproof option for finservs.

Deepfakes, and the SuperSynthetic™ quandary

Gen AI platforms such as ChatGPT and Bard, coupled with their nefarious brethren FraudGPT and WormGPT and the like, are so accessible it’s scary. Everyday users can create realistic, deepfaked images and videos with little effort. Voices can be cloned and manipulated to say anything and sound like anyone. The rampant spread of misinformation across social media isn’t surprising given that nearly half of people can’t identify a deepfaked video.

Which is more disturbing: the deepfaked Mona Lisa, or the fact that someone created it more than three years ago?

Finserv companies are especially susceptible to deepfaked trickery, and bypassing remote identity verification will only get easier as deepfake technology continues to rapidly improve.

For SuperSynthetics, the new generation of fraudulent deepfaked identities, fooling finservs is quite easy. SuperSynthetics, a one-two-three punch of deepfake technology, synthetic identity fraud, and legitimate credit histories, are more humanlike and individualistic than any previous iteration of bot. The bad actors who deploy these SuperSynthetic bots aren’t in a rush; they’re willing to play the long game, depositing small amounts of money over time and interacting with the website to convince finservs they’re prime candidates for a loan or credit application.

When it comes time for the identity verification phase, SuperSynthetics deepfake their documents, selfie, and/or video interview…and they’re in.

An overhaul is in order

Deepfake technology, which first entered the mainstream in 2018, is still in its infancy, yet it already pokes plenty of holes in remote identity verification.

The “ID plus selfie” process, as Gartner analyst Akif Khan calls it, is how most finservs are verifying loan and credit applicants these days. The user takes a picture of their ID or driver’s license, authenticity is confirmed, then the user snaps a picture of themselves. The system checks the selfie for liveness and makes sure the biometrics line up with the photo ID document. Done.
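The three steps above can be sketched as a simple gate-by-gate pipeline. The helper functions passed in (`extract_id_photo`, `liveness_score`, `face_match_score`) are hypothetical stand-ins for a vendor's document and biometric APIs, and the thresholds are assumed values for illustration only.

```python
# Assumed score cutoffs; real systems tune these per risk appetite.
LIVENESS_THRESHOLD = 0.9
MATCH_THRESHOLD = 0.8

def verify_applicant(id_image, selfie_image,
                     extract_id_photo, liveness_score, face_match_score):
    """Minimal 'ID plus selfie' flow: document check, liveness check,
    then biometric match of the selfie against the ID portrait."""
    # Step 1: confirm the ID document is authentic and pull the portrait.
    portrait = extract_id_photo(id_image)
    if portrait is None:
        return "reject: document failed authenticity check"
    # Step 2: check the selfie for liveness (live person, not a replay).
    if liveness_score(selfie_image) < LIVENESS_THRESHOLD:
        return "reject: liveness check failed"
    # Step 3: make sure the selfie biometrics line up with the photo ID.
    if face_match_score(portrait, selfie_image) < MATCH_THRESHOLD:
        return "reject: selfie does not match ID portrait"
    return "approve"
```

Notice that every gate is remote and automated, which is exactly why a convincing deepfake at step 2 or 3 defeats the whole flow.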

The process is convenient for legitimate customers and fraudsters alike thanks to the growing availability of free deepfake apps. Using these free tools, fraudsters can deepfake images of docs and successfully pass the selfie step, most commonly by executing a “presentation attack” in which their primary device’s camera is aimed at the screen of a second device displaying a deepfake.

Khan advocates for a layered approach to deepfake mitigation, including tools that detect liveness and check for certain types of metadata. This is certainly on point, but there’s an old-school, far less technical way to ward off deepfaking fraudsters. Its success rate? 100%.

The good ol’ days

Remember handshakes? How about eye contact that didn’t involve staring into a camera lens? These are merely vestiges of the bygone in-person meetings that many finservs used to hold with loan applicants pre-COVID.

Outdated and less efficient as face-to-face meetings with customers might be, they’re also the only rock-solid defense against deepfakes.

Not even advanced liveness detection is a foolproof deepfake deterrent.

Sure, the upper crust of finserv companies likely have state-of-the-art deepfake deterrents in place (e.g., 3D liveness detection solutions). But liveness detection doesn’t account for deepfaked documents or, more importantly, deepfaked video. Nor does it account for the fact that the generative AI tools available to fraudsters are advancing just as fast as vendor solutions, if not faster. It’s a full-blown AI arms race, and with it comes a lot of question marks.

In-person verification (only for high-risk activities) puts these fears to bed. Is it frictionless? Obviously far from it, though workarounds, such as traveling notaries who meet customers at their residence, help ease the burden. But if heading down to a local branch for a quick meet-and-greet is what it takes to snag a $10K loan, will a customer care? They’d probably fly across state lines if it meant renting a nicer apartment or finally moving on from their decrepit Volvo.

Time to layer up

Khan’s recommendation that finservs assemble a superteam of anti-deepfake solutions is sound, so long as companies can afford to do so and can orchestrate the many solutions into a frictionless consumer experience. Vendors have access to AI in their own right, powering tools that directly identify deepfakes through telltale patterns or that key in on metadata such as the resolution of a selfie. Combine these with the most crucial layer, liveness detection, and the final result is a stack that can at the very least compete against deepfakes.
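One way to picture the layered stack is as a weighted risk score with liveness as a hard gate. This is a sketch under stated assumptions: the detector names, weights, and thresholds below are invented for illustration and do not correspond to any particular vendor's product.

```python
def layered_deepfake_risk(signals, weights=None, liveness_gate=0.9):
    """Combine detector outputs into a single risk score in [0, 1].

    signals: dict of detector name -> score in [0, 1] (higher = riskier),
    plus a 'liveness' score where higher = more likely a live person.
    """
    # Illustrative layer weights: artifact-pattern detection counts most,
    # then metadata anomalies, then resolution heuristics.
    weights = weights or {"artifact_patterns": 0.5,
                          "metadata_anomaly": 0.3,
                          "resolution_check": 0.2}
    # Hard gate: a failed liveness check is an immediate maximum-risk verdict.
    if signals.get("liveness", 0.0) < liveness_gate:
        return 1.0
    total = sum(weights.values())
    return sum(w * signals.get(name, 0.0) for name, w in weights.items()) / total

risk = layered_deepfake_risk({
    "liveness": 0.97,          # passed the liveness gate
    "artifact_patterns": 0.1,  # pattern detector found little
    "metadata_anomaly": 0.2,   # selfie resolution/metadata look normal
    "resolution_check": 0.0,
})
print(round(risk, 3))  # 0.5*0.1 + 0.3*0.2 + 0.2*0.0 = 0.11
```

The design point is the orchestration itself: no single layer is trusted alone, and the cheap-to-break layers only ever add risk, never subtract it.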

SuperSynthetics aren’t as easy to neutralize. In previous posts, we’ve advocated for a “top-down” anti-fraud solution that spots these types of identities before the loan or credit application stage. Contrary to individualistic fraud prevention tools, this bird’s-eye view reveals digital fingerprints—concurrent account activities, simultaneous social media posts, etc.—that otherwise would go undetected.

In the meantime, it doesn’t hurt to consider the upside of an in-person approach to verifying customer identities (prior to extending a loan, not onboarding). No, it isn’t flashy, nor is it flawless. However, it is reliable and, if finservs effectively articulate the benefit to their customers—protecting them from life-altering fraud—chances are they’ll understand.