Deduce’s partnership with Alloy restores trust and stops AI-driven synthetic identities

If you haven’t heard, synthetic fraud is making lots of noise in finserv circles. In fact, synthetic identities—a cross-stitching of stolen PII (Personally Identifiable Information) and made-up data—are officially behind the fastest-growing financial crime in the US.

Of course, this is mostly due to those two inescapable letters: AI. Equipped with automated, human-like intelligence, fake identities are both innumerable and too realistic for most fintech companies to reliably stop.

This only increases the pressure on banks and fintechs who struggle to differentiate between real and fake customers. Trust is at an all-time low. And who can blame them? Accepting fake customers leads to major fraud losses and costly KYC (Know Your Customer) violations.

Recently, Deduce partnered with Alloy to help banks and finservs resolve these trust issues. Alloy’s customizable, end-to-end risk management solution serves almost 600 fintechs and financial institutions. Stack Deduce’s real-time, scalable identity intelligence on top, and identity data orchestration is a cinch.

Here’s a closer look at the ongoing struggle with customer verification, and how the Deduce-Alloy partnership leads to easier decisions, less risk, more compliance, and substantial savings.

Major trust issues

Stolen and synthetic identities have duped social media platforms, the gig economy, elections, and even universities. But banks, unquestionably, are their preferred target. Powered by Gen AI, synthetic identities are projected to cost banks a whopping $40B by 2027.

The Frankenstein-esque combo of real and fake PII already makes synthetic identities a tough catch. Gen AI is the coup de grâce. Aside from the added scalability, intelligence, and ease of deployment, Gen AI arms synthetic fraudsters with deepfake capabilities and the ability to create authentic-looking digital legends. Whether it’s a photo, video, or audio, synthetics can look or sound disturbingly lifelike—just ask the Hong Kong bank exec who was swindled out of $25.6M on a single call.

Decisioning is tougher than ever in today’s banking climate. Under immense pressure, financial institutions are lassoing as many new customers as possible with promises of lower APRs and other attractive offers. On paper, this may generate positive results, but manual review costs and other issues say otherwise.

One community bank we spoke to saw enrollment jump 25% quarter-over-quarter, but manual reviews climbed from 12% to 20% (anything over 7% is uh-oh territory). These same community banks are often used by fraudsters to bolster “thin file” credit reports. They’ll apply for a subprime loan and faithfully repay it with the intent of pushing their FICO score above 700, so they can take out a bigger loan elsewhere.

These AI-driven synthetic fraudsters will go above and beyond to appear like the real deal. Clicking on acquisition ads to start the account opening process, engaging with chatbots on banking websites—all to validate their interest when it comes time to apply. Synthetic identities can shapeshift into whichever customer a bank prefers. For example, consider a marketing campaign that’s targeting college students in need of a loan or credit card. Synthetics will embody these personas to appear more like students: .edu emails, social media posts about college, alumni connections on LinkedIn, etc.

Most banks and finservs simply don’t have the dynamic identity data to differentiate between good and fraudulent applicants at a high level.

How Alloy evens the odds

Alloy’s end-to-end identity risk solution enables banks and finservs to quickly and accurately make decisions about onboarding, credit underwriting, and potential AML (anti-money laundering) cases.

In short, Alloy is an orchestration platform. Like a master composer, it deftly arranges the various parts of the customer journey—including the onboarding workflow—into one UX-traordinary symphony.

In our new synthetic reality, Alloy’s automated, on-point decision making helps mitigate risk, facilitate compliance, and minimize identity verification costs. Alloy’s global network of data vendors simplifies identity risk for financial institutions, from step-up authentication (DocV) to manual reviews and OTP (one-time passcode) challenges.

And don’t forget passive identity affirmation—that’s where Deduce comes in.

Deduce + Alloy: the ultimate decisioning duo

Deduce and Alloy provide intelligence to detect AI-driven identity fraud, verify trusted customers, and reduce the number of costly step-up tasks.

Deduce provides Alloy customers the real-time identity insights needed to spot trusted customers and sniff out synthetic fraudsters. In anticipation of the AI-driven fraud boom, Deduce built the largest activity-backed identity graph for fraud and risk in the US.

The Deduce Identity Graph, as we like to call it, conducts multicontextual, real-time data forensics at scale. Here’s how Deduce and its Identity Graph handle trust issues for banks and fintechs (a simplified sketch of the idea follows the list):

  • Collects and analyzes real-time, activity-backed identity data from 1.5B+ daily authenticated events and 185M+ weekly users
  • Employs entity embedding, deep learning neural networks, graph neural networks and generalized classification to recognize fraudulent activity patterns 
  • Matches activities between identities under review and other identities
  • Spots new identity fraud threats as they emerge
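
To make the last two bullets a bit more concrete, here is a heavily simplified sketch of the underlying idea: group identities by a shared activity-timing signature and surface cohorts too synchronized to be coincidence. The event records, field names, and cohort-size cutoff below are all hypothetical stand-ins for what a real-time identity graph would compute at scale; this is not Deduce’s actual implementation.

```python
from collections import defaultdict

# Hypothetical event records: (identity_id, site, action, weekday, minute_of_day).
# In production these would come from a real-time event stream, not a list.
events = [
    ("id_001", "bank.example", "login", "Wed", 1167),
    ("id_002", "bank.example", "login", "Wed", 1167),
    ("id_003", "bank.example", "login", "Wed", 1167),
    ("id_404", "bank.example", "login", "Tue", 540),
]

def coordinated_cohorts(events, min_size=3):
    """Group identities that repeat the same action on the same site at the
    same weekday and minute -- a timing signature unlikely for unrelated humans."""
    cohorts = defaultdict(set)
    for identity, site, action, weekday, minute in events:
        cohorts[(site, action, weekday, minute)].add(identity)
    # Only signatures shared by several identities are worth a closer look.
    return {sig: ids for sig, ids in cohorts.items() if len(ids) >= min_size}

print(coordinated_cohorts(events))
# {('bank.example', 'login', 'Wed', 1167): {'id_001', 'id_002', 'id_003'}}
```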

Deduce also identifies traditional identity fraud while reducing false positives and secondary reviews: telltale signs of trust renewed.

Deduce’s partnership with Alloy is already paying dividends for banks (and their customers). One popular credit card rewards program relies on Deduce and Alloy to spot applications using stolen and synthetic identities without slowing down the user experience or triggering secondary reviews. In a competitive credit rewards landscape where seamless onboarding is the name of the game, reduced friction and verification costs are a must. Deduce and Alloy have it covered.

Identity verification you can bank on

Deduce knows your new customers very well. So well, in fact, that you can take it for a spin and see for yourself. Call it the “Deduce Passive Identity Affirmation Challenge.”

If you’re already an Alloy customer, it’s easy to see how effortlessly Deduce spots both trusted users and surefire fraudsters. Just follow these four easy steps:

1. Ask your Alloy representative about using Deduce.

2. Receive and enter your 30-day evaluation API key from Deduce.

3. Run your side-by-side, in-line evaluation against your current fraud prevention stack.

4. Compare the results. For each applicant Deduce trusts, what result did your existing solution provide? Did your solution’s result lead to increased friction, wasted step-up spend, or worse, application abandonment?
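
For step 4, a simple tally makes the comparison concrete. The sketch below assumes you have exported per-applicant verdicts from both systems; the field names and labels are hypothetical, not part of any Deduce or Alloy API.

```python
# Hypothetical per-applicant results from the side-by-side evaluation in step 3.
# "deduce" holds Deduce's verdict; "incumbent" holds your current stack's verdict.
results = [
    {"applicant": "A-1001", "deduce": "trusted", "incumbent": "step_up"},
    {"applicant": "A-1002", "deduce": "trusted", "incumbent": "approved"},
    {"applicant": "A-1003", "deduce": "unknown", "incumbent": "step_up"},
]

# Applicants Deduce trusted that the incumbent stack still sent to step-up:
# each one represents avoidable friction and wasted verification spend.
avoidable_friction = [
    r["applicant"]
    for r in results
    if r["deduce"] == "trusted" and r["incumbent"] == "step_up"
]
print(f"{len(avoidable_friction)} applicants faced unnecessary step-up: {avoidable_friction}")
```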

Not to spoil the fun, but at the end of your test drive expect Deduce to:

  • Know more than 91% of your new applicants
  • Label nearly 50% of new applicants as trusted, with 99.7% accuracy

Have any questions? Feel free to contact us. You can also find Deduce on the Alloy partner portal.

Synthetic fraud is plaguing the gig economy, sharing economy, & more

Halloween is still a couple of months away, but synthetic fraudsters are already getting in on the fun by cosplaying as real people.

Social media. Banks. Elections. Universities. Fake profiles and accounts have infiltrated nearly every facet of everyday life. Online fraud is up 20% this year, with stolen and synthetic fraud representing 85% of all cases.

The ubiquity of an Uber or DoorDash makes the gig economy another lucrative target for synthetic fraudsters. In fact, one in three users is a victim of fraud on gig economy apps. Other on-demand, app-based services—such as dating or home-sharing apps—are vulnerable as well.

What do these schemes look like across the gig, sharing, and dating app economies? Even by today’s standards, the scale and ingenuity behind such operations is impressive, but are these pesky synthetic fraudsters untouchable? Not quite.

Rideshare mafiosos

A recent WIRED feature revealed just how susceptible an Uber or Lyft is to synthetic fraud, assuming one possesses the drive, creativity, and at least a novice grasp of technology.

Priscila Barbosa, the protagonist of the WIRED article, embodied these three attributes (and then some). After arriving at JFK International Airport in 2018 with two suitcases, $117 and, crucially, an iPhone, the Brazil native would soon realize her own perverse version of the American Dream—and send shockwaves through the entire gig economy ecosystem.

Capitalizing on loose identity verification protocols, Barbosa and her team of associates made a cushy living buying stolen identities on the dark web, sometimes combining real and fake PII (Personally Identifiable Information) to spawn synthetic “Frankenstein” identities. Barbosa used these identities to create fake Uber accounts she’d then rent out for a fee. Barbosa made over $10K per month loaning accounts to non-citizens who lacked the ID necessary to drive for Uber, including profits earned from driver referral bonuses.

When rideshare apps beefed up their verification processes and asked drivers to sign up in person, Barbosa found another way in, or moved on to other services like DoorDash and Instacart. Barbosa’s knack for improvisation was impressive, as was her deft usage of bots, GPS spoofing, and photo editing apps to avoid detection and forge documents.

By the time the FBI caught up to Barbosa and her “rideshare mafia,” she’d netted almost $800K in less than three years. One might assume an EECS degree would be table stakes for such a large-scale operation but, as Barbosa showed, all that’s needed is a smartphone and a dream.

Synthetic landlords

The sharing economy faces its own synthetic crisis, perhaps most notably with home rental services like Airbnb and VRBO. Fake users, posing as landlords or property owners, are cashing in on fraudulent listings of properties, infuriating unsuspecting travelers and the rightful owners of said properties.

Surely, no one envies the poor woman in London whose home was listed on Booking.com—unbeknownst to her—and rented by tourists who, upon arrival, quickly found out they’d been duped. And this went on for weeks!

For its part, Airbnb has tried to stem the fake listing epidemic. Last year, Airbnb deleted 59K fake listings and stopped 157K from joining the app, even incorporating AI to bolster the verification process.

Little did Airbnb (and VRBO) know, their biggest rental scam yet would hit the newswire just a few months later. 10K fake Airbnb and VRBO reservations across 10 states. The damage: $8.5M. Indicted in January 2024, the two perpetrators steadily built their illegitimate short-term home rental business over the course of a few years, listing properties across the country under fake host names and stolen identities. That two people could execute a con of this scale speaks to the intelligence of synthetic fraudsters—and the AI tools augmenting their efforts.

Love at first deepfake

Synthetic fraudsters are also running roughshod in the dating app world. Users seeking a love connection are falling for fake profiles and, in many cases, sending substantial amounts of money to their fraudulent “admirers.”

In 2023, there were more than 64K romance scams in the US, with total losses reaching $1.14B—and these numbers are conservative given that victims may be embarrassed to come forward. Dating apps are scrambling to appease Gen Z female users who are jumping ship. A Bumble survey found that nearly half of its female users feared fake profiles and scams on the platform.

Dating app cons, including schemes such as catfishing and pig butchering, are easily executed by synthetic fraudsters equipped with Gen AI. Deploying fake dating app profiles en masse? Enlisting automated, humanlike chatbots to seduce victims? Deepfaking audio and video, plus AI-generated profile pictures that can’t be reverse-image-searched via Google? It’s all possible with Gen AI, making synthetic fraudsters appear legitimate even to discerning users.

Just how many fakes are there on dating apps? The recent Netflix documentary Ashley Madison: Sex, Lies & Scandal revealed that 60% of the profiles on the Ashley Madison app are bogus. Suddenly, blind dates with friends of friends don’t sound all that bad…

The gig is up

Considering the low barrier to entry and the democratization of Gen AI, among other factors, it might appear the deck is stacked against companies battling synthetic fraudsters, especially smaller businesses not named Uber or Airbnb.

But renewed hope lies in a novel approach: catching these fake identities early in the account creation workflow. In fact, preemptive detection is the only way to neutralize AI-driven stolen and synthetic identities. Why? Because once these accounts are created, it’s essentially curtains for fraud prevention teams—too late in the game to distinguish the human-like behaviors of synthetics from their truly human counterparts.

Pre-account creation, on the other hand, allows for a bird’s-eye view that analyzes identities as a group rather than one by one. Verifying identities individually, i.e., the more traditional strategy, won’t cut it with synthetic and AI-driven stolen identities, but collective verification reveals signs of fraud that otherwise would go undetected.

For example, if multiple identities perform the same activity on the same website or app at the exact same time every week, something is likely afoot. To avoid a possible false positive, cross-referencing against trust signals like device, network, geolocation, and more assures fraud teams that flagging is the right move.
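
As a rough illustration of that cross-referencing step, the sketch below only confirms a flag when an identity from a suspicious cohort also fails several trust signals. The signal names, sample data, and threshold are placeholders for illustration, not Deduce’s actual logic or data.

```python
# Hypothetical trust signals per identity; a production system would pull these
# from device fingerprinting, network intelligence, and geolocation services.
TRUST_SIGNALS = {
    "id_001": {"known_device": False, "residential_network": False, "geo_consistent": False},
    "id_002": {"known_device": True,  "residential_network": True,  "geo_consistent": True},
}

def confirm_flag(identity_id, min_failed_signals=2):
    """Flag only when a suspicious timing pattern is corroborated by weak
    trust signals, reducing the odds of a false positive."""
    signals = TRUST_SIGNALS.get(identity_id, {})
    failed = sum(1 for passed in signals.values() if not passed)
    return failed >= min_failed_signals

suspicious_cohort = ["id_001", "id_002"]
flagged = [i for i in suspicious_cohort if confirm_flag(i)]
print(flagged)  # ['id_001'] -- id_002 looks trusted and is left alone
```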

When tens of thousands (or more) of AI-powered, synthetic identities are storming account creation workflows, the preemptive, bird’s-eye approach is as fool-proof as it gets. The alternative: churn, lost revenue, and potentially a PR nightmare.

Does singling out synthetic accounts require a gargantuan chunk of real-time identity intelligence, on par with that of the “FAANG gang”? Yes. Is accessing this much data even possible? Believe it or not, also yes.

The Deduce Identity Graph packs the requisite real-time identity data to confidently deem an account real or fake, underscored by a trust score that is 99.5% accurate. This gives businesses of all sizes much more than a fighting chance. And for rideshare mafias, fake Airbnb landlords, and dating app swindlers, the gig may finally be up.

No person or institution is safe from Gen AI fraud

As AI-generated fraud continues to proliferate, so do the use cases demonstrating its deceptive, humanlike behavior. Automated and highly intelligent SuperSynthetic™ identities, driven by  Gen AI and deepfake technology, are defrauding banks, colleges, elections and any other institution or person they can make a pretty penny off of.

Deloitte expects Gen AI fraud to cost US banks $40B by 2027, up from $12.3B in 2023 (a 32% compound annual growth rate). This prognostication, which bodes ill for industries outside of banking as well, isn’t as bold as it sounds. Just this year, a fraudster using a deepfaked CFO convinced a Hong Kong finance worker to wire nearly $26M.

AI-generated posts and comments are just as dangerous for businesses, though “sweeping” legislation (e.g., the EU Digital Services Act) fails to adequately address this threat. Meanwhile, tools such as FraudGPT, designed to write copy used in phishing attacks, can whip up social media posts that enrage a specific audience with the goal of fostering engagement and advertising revenue. 

The increased frequency of Gen AI fraud should make it a top priority for cybersecurity teams. From the upcoming Olympic Games to the King of Rock and Roll himself, here are some recent examples of Gen AI fraudsters wreaking havoc—and how businesses can protect themselves from similar attacks.

Elvis has not left the building

Elvis may be spared from the threat of Gen AI fraud, but his legendary Graceland mansion isn’t so lucky.

Last month, Graceland, now a museum, was reportedly headed for a foreclosure sale. Forged documents claimed the late Lisa Marie Presley (Elvis’ daughter) took out a $3.8M loan from a lending company later determined to be bogus, and had used Graceland as collateral.

Tourists lined up to visit the Graceland mansion. (Mandel Ngan/AFP via Getty Images)

This is a high-profile example of home title fraud in which bad actors pretend to be homeowners. After finding a suitable mark—someone elderly, recently deceased, or similarly vulnerable—fraudsters try to refinance or sell the house and cash out.

Fortunately, a judge stopped the sale of Graceland before the fake “Naussany Investments and Private Lending” company could profit from its clever caper. But the scheme, carried out by a prominent dark web fraudster with “a network of ‘worms’ placed throughout the United States,” shows how easy it is to deepfake IDs, documents, and signatures—even those of public figures.

Swiping right on dating app fraud

Dating apps are a hotbed, so to speak, for catfishing and pig butchering. While Gen AI has boosted the effectiveness of both scams, pig butchering—when a fraudster slowly builds rapport with a victim before asking them for money—is the more worrisome of the two.

There were 64K confirmed romance scams in the US last year, totaling $1.14B in losses. The true figures are likely even higher because many victims, ashamed of being suckered out of thousands of dollars or cryptocurrency, don’t come forward.

Gen AI brings a whole new meaning to speed dating. (Getty Images/Futurism)

Swindling unsuspecting dating app users is easier than ever thanks to—you guessed it—deepfakes. Photos, audio, and video can all be AI-generated. AI-powered chatbots are practically indistinguishable from humans, and fraudsters can also leverage AI to deploy fake profiles at massive scale, all on autopilot. Background-checking a suspected fake user is unlikely to work for a variety of reasons; among them, large language models (ChatGPT, Gemini, etc.) convincingly build out social media profiles, and the unique AI profile pictures won’t appear via reverse-image-search.

Dating app fraud, obviously a significant user experience detractor for these businesses as well, inspired a popular 2024 Netflix documentary: Ashley Madison: Sex, Lies & Scandal. The documentary details how Gen AI chatbots on the titular dating app build credibility by showing familiarity with hotspots within a victim’s zip code. Perhaps the doc’s biggest claim is that 60% of Ashley Madison profiles are fake.

Russia “medals” with Paris Olympics

Russia, barred from competing in this year’s Olympics because of the war in Ukraine, used Gen AI and deepfakes to retaliate against the International Olympic Committee. Their goal: smear the committee’s reputation, and stoke fear of a potential terrorist attack at the Games to dissuade fans from attending.

Most notably, Russian bad actors posted a fake, disparaging online documentary (“Olympics Has Fallen”) and implied that Netflix had backed the production. To further legitimize the documentary, they generated bogus glowing reviews from The New York Times and other prominent news outlets, and used deepfaked audio of Tom Cruise to suggest his involvement and support.

A visual from Russia’s deepfaked documentary, “Olympics Has Fallen.”

The follow-up to this cringeworthy tactic was a series of fake news reports spreading more disinformation about the Olympics. A knockoff reproduction of a French newscast reported that nearly a quarter of purchased tickets for the Games in Paris had been returned due to fears of a terrorist attack. Another video, falsely attributed to the CIA and a French intelligence agency, urged spectators to avoid the Olympics because of, again, potential terrorism.

Russia’s deepfaked “medaling” with the Olympics is yet another example of why no organization or individual is safe from Gen AI-based fraud. So, what’s the fix?

Catch it early, or never at all

It’s never been more imperative to fortify synthetic fraud defenses, as these “Frankenstein” identities—stitched together using real and fake PII (Personally Identifiable Information)—are now SuperSynthetic identities with Gen AI and deepfakes at their disposal. It’s no wonder fraud is up 20% this year, with synthetics comprising 85% of all fraud cases.

Uphill as the battle might seem, the “early bird gets the worm” adage offers hope for finservs and other businesses threatened by Gen AI fraud. In addition to preemptive detection (prior to account creation), neutralizing Gen AI fraud—including SuperSynthetics—requires a heap of real-time, multicontextual, activity-backed identity intelligence. 

Obtaining the requisite identity intelligence to stop Gen AI fraud is a tall task for any company not named Google or Microsoft. But Deduce’s infrastructure and unique fraud prevention strategy are plenty tall enough.

Deduce’s “signature” approach sets itself apart from traditional antifraud tools that hunt fraudsters one by one. By looking at identities from a bird’s-eye view, or collectively, Deduce recognizes patterns of behavior that typify SuperSynthetics. Despite the humanlike nature and extreme patience of SuperSynthetic identities (they often “play nice” for months before striking), they aren’t perfect. Spotting multiple identities that perform social media or banking activities on the same day and at the same time every week, for example, rules out the possibility of coincidence.

Cross-referencing these behavioral patterns with trust signals such as device, network, and geolocation, further roots out SuperSynthetic bad apples. Deduce’s trust scores are 99.5% accurate, so companies can rest assured knowing any identity deemed legit has been seen with recency and frequency on the Deduce Identity Graph. And if Deduce hasn’t seen an identity across its network, fraud teams can flag with confidence.
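
To show how a trust score might slot into decisioning, here is a minimal sketch: trusted identities sail through, never-seen identities get flagged, and everything in between goes to step-up. The threshold, class, and field names are illustrative assumptions, not documented Deduce parameters or APIs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentityAssessment:
    trust_score: Optional[float]  # None means the identity has never been seen on the network

# The 0.9 cutoff is an illustrative placeholder, not a documented Deduce value.
TRUST_THRESHOLD = 0.9

def onboarding_decision(assessment: IdentityAssessment) -> str:
    """Route applicants: skip step-up for trusted identities, challenge the rest."""
    if assessment.trust_score is None:
        return "flag_for_review"         # never seen: fraud teams can flag with confidence
    if assessment.trust_score >= TRUST_THRESHOLD:
        return "approve_without_stepup"  # trusted: no DocV or OTP friction
    return "step_up_verification"        # uncertain: route to DocV or OTP

print(onboarding_decision(IdentityAssessment(trust_score=0.97)))  # approve_without_stepup
print(onboarding_decision(IdentityAssessment(trust_score=None)))  # flag_for_review
```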

When it comes to Gen AI fraud and the SuperSynthetic identities it empowers, no person or institution is safe. Music icons, dating apps, the Olympic Games, finservs, you name it, are all susceptible to being deepfaked or otherwise bamboozled by a fake human. Any chance of fighting back demands preemptive, real-time identity intelligence. Catch ‘em early, or users, bottom lines, and reputations are in for a rude awakening.

Gen AI fraud flusters marketers, fraud teams, and customers

Big tech continues to tout the unprecedented intelligence and endless potential of generative AI. And for good reason: It’s tough to ignore the efficiency and reliability of a Gen AI superbrain that doesn’t sleep, call off, or overdrink at the office holiday party.

But as Gen AI gets smarter, so do fraudsters. Fraud is already up 20% year-over-year, and the accessibility of AI has caused synthetic identities to proliferate to a startling degree.

Impersonation fraud, which includes synthetic “Frankenstein” identities consisting of real and fake PII (Personally Identifiable Information), accounts for 85% of all fraud. Synthetic identities are so prevalent that even Vanity Fair has likened the problem to “a Kafkaesque nightmare.”

Synthetics, bolstered by deepfake technology and realistic account activity, are nearly impossible to catch. Friend or foe? Real or fake? These questions are pulling marketing and fraud teams in opposite directions, and it’s customers (and businesses) who are paying the price.

SuperSynthetic™, super problematic

As of Q1 2022, one out of every four new accounts was fake. One can imagine how much that number has increased given the AI and synthetic fraud surge. The auto lending industry was hit the hardest in 2023, seeing a 98% spike in synthetic attempts to the tune of $7.9B in losses.

Once synthetics make it past the account verification stage it’s essentially game over. Shockingly, more than 87% of companies have extended credit to synthetic customers, and 76% of US banks have synthetic accounts in their database.

Traditional synthetic identities are hard enough to stop with their convincing mishmash of real and made-up PII, but their mighty offspring—SuperSynthetic™ identities—pack an even bigger punch.

Perhaps “mighty” is too strong a word considering the SuperSynthetic trademark is its monk-like patience. A fully automated SuperSynthetic identity plays the long game, making small deposits, checking account balances, and otherwise performing humanlike actions over the course of several months. Once enough trust is built, and a line of credit is extended, these fake customers transfer out their funds and exit stage left.

The trickery of SuperSynthetic identities isn’t limited to finservs. Colleges are now dealing with fake students, fake information on social media is flipping elections, and seemingly any platform utilizing an account creation workflow is vulnerable.

Banks are still the primary target, however, much to the chagrin of their marketing and fraud teams.

A churning sensation

There’s nothing wrong with tightening a leaky faucet, but overtightening can cause another leak. Similarly, “fixing” a synthetic identity problem by dialing up the fraud controls to 11 leads to more harm than good.

Indeed, many engineers on fraud teams are constricting their algorithms so rigidly that even slightly suspicious activity is flagged. VPN use, for example, is a callout despite the ubiquity of VPNs among today’s users. Innocuous shorthand in addresses (Main Street vs. Main St.) and names (Andy vs. Andrew) can also tip off jumpy fraud algos. In a sign of the times, what used to be low risk is now classified as medium risk, and what was formerly medium risk is now high risk.

False positives. ID verification. Manual reviews. Overly stringent fraud defenses annoy marketers and users like none other. The friction is often unbearable for customers who would rather jump ship than jump through account verification hoops. Consumers, who expect instant gratification in today’s online market, don’t want to hear “Thanks for your application, we are reviewing it and will be in touch.” They’ll quickly start an application at a competing financial institution where they can receive instant credit.

The Deduce team has witnessed this friction firsthand. Our CTO, a customer of his bank for more than two decades, was forced to undergo document verification while using an account, device, and network that had previously been affirmed. Our VP of Marketing, a United Airlines customer for over three decades, was challenged on the United app for a CA-to-NY flight after he had already passed TSA PreCheck, scanned his boarding pass, and boarded the plane.

Friction is nightmarish for marketers as well, who have virtually no shot at meeting their customer acquisition KPIs. AI-powered synthetic fraud—and the rigid counterattacks used against it—leads to a three-pronged cluster-you-know-what: (a) more fraud, (b) more invasive verification checks that cost substantially more, and (c) more user friction that leads to account or loan abandonment and impacts lifetime value and customer acquisition costs.

Trust or bust

The key to ferreting out synthetic identities is to do the work early. Leverage real-time, multicontextual, activity-backed identity intelligence to stomp out synthetics pre-account creation.

Deduce employs the infrastructure and strategy that epitomize this preemptive solution. By taking a high-level, “signature” approach that differs from individualistic fraud tools, Deduce uncovers hidden digital footprints. Lifelike as synthetic fraudsters are, spotting cohorts of users that post on social media and perform identical account actions at the same time and day each week rules out the possibility of legitimacy.

Fraud teams can refrain from ratcheting up their algos knowing that Deduce’s trust scores are 99.5% accurate. If Deduce deduces a user is trustworthy, it’s seen that identity with recency and frequency via multiple trust signals, including, among others, device, network, geolocation, IP, and a “VPN affinity” signal that identifies longtime VPN users.

47% of the 920M identities in the Deduce Identity Graph are trusted. In fact, Deduce is the only vendor in the market that returns a trusted score for an identity. Others offer a “low risk” score, which is risky enough for many fraud managers to flag, resulting in a false positive.  

Neutralizing synthetic fraud starts with trust, and it starts early. If you want to keep your marketing team and customers happy, and avoid the losses that come with overaggressive fraud controls, go the preemptive route—before things take a “churn” for the worse.

Some assembly required, but not much

You can be anyone you want to be.

Utter these words to your average cynic and their eyes will roll out of their sockets. But, thanks to AI, this phrase is now more truism than affirmation.

For fraudsters, AI may as well be a giant check from Publishers Clearing House. AI-generated synthetic identities net hefty payouts with minimal effort. Bad actors can seamlessly create and orchestrate synthetic identities at scale to fake out banks, execute election hacking schemes, or any other plot requiring AI-powered chicanery.

How does one go about creating a synthetic identity? It’s easier, and more lucrative, than you might think: Arkose Labs estimates that one in four new accounts is fake, and Cheq.ai reports that these fake bots and users steal $697B annually.

We’ve outlined the steps to making a synthetic identity below. (Insert “Don’t try this at home” disclaimer here.) No, we’re not trying to add more reinforcements to the growing army of AI-generated fraudsters. Just making sure banks and other finservs grok the magnitude of this cunning and highly intelligent cyberthreat.

Let’s dig in.

Step one: breaching bad

Creating synthetic identities begins with a big bang. A sizable breach occurs, like the recent AT&T heist affecting 70M+ total customers, and oodles of PII (personally identifiable information) are stolen and subsequently sold on the dark web. (A cursory look on Telegram will surface half a dozen “data brokers” offering data from AT&T.)

Recently deceased people’s SSNs (social security numbers) and infant SSNs are another crowd favorite for fraudsters. After all, the first group won’t need them again and the second group likely won’t need theirs for a couple of decades. In fact, Equifax—yes, the one from the 2017 data breach involving 147M stolen identities—recently announced it had 1.8M fake identities with SSNs in its database.

PII is the lifeblood of any synthetic identity, and the dark web is essentially a flea market where the basic building blocks of a synthetic ID—first names, last names, SSNs, and DOBs (dates of birth)—can be purchased for pennies on the dollar.

On the dark web, a synthetic fraudster buys a large batch of PII, usually tens of thousands of identities’ worth. Using stolen SSNs, they can access FICO data without triggering an alert to the legitimate owner of the number, then leverage AI to organize the thousands of identities by credit score (less than 600, 600-700, 700-800, etc.). Identities with scores below 700 would be matched to activities that bolster credit scores, such as making charitable donations and applying for and paying off subprime or same-day loans. This essentially amounts to “pig butchering” their credit score to over 700. (More on this later.)

Step two: signs of life

Next up: It’s time to give this fake human a “pulse.”

The first priority is to add an email address, ideally an aged and geo-located email address. Penny pinchers can create a free email address, but in either case the fraudster communicates via this email to build credibility. The email would also need to be matched with an identity in the same geography. It helps if the stolen identity boasts a high enough credit score to convince banks they’re onboarding an attractive new customer, but some fraud opportunities (such as subprime lending) don’t require a top-notch credit score.

The next task is to nab a new phone number, which comes in handy for authentication purposes. Apply the phone number to a cheap Boost Mobile phone or the like, and that’s enough to bypass 2FA (two-factor authentication) and OTP (one-time passcodes).

A fraudster using multiple smartphones to manage synthetic identities

Once the new synthetic account is live, it must avoid suspicion by interacting and existing online like a real human would. Filling out the rest of their profile details. Chatting with online support agents on e-commerce websites. Clicking the ads on a bank’s website that offer opportunities to apply for credit cards and loans.

Fake identities can further legitimize themselves by building out social media profiles on platforms like X or LinkedIn. Who’s to say they don’t actually work for IBM or some other Fortune 100 stalwart, or didn’t graduate from Harvard or some other Ivy League school? Do customer onboarding teams have time to poke holes? Probably not.

Step three: building credit

Once a synthetic identity has convincingly acted the part of a real human, all that’s left is to build credit before cashing out and moving on to another unsuspecting bank.

This is the trademark of the latest iteration of synthetics, known as SuperSynthetic™ identities. Rather than putting on ski masks and bull-rushing banks, SuperSynthetics prefer to take their sweet time.

Over the course of several months, a SuperSynthetic bot leverages its AI-generated identity to digitally deposit small amounts of money. In the meantime, it interacts with website and/or mobile app functions so as to not raise suspicion. SuperSynthetics might also build credit history by paying off cheap payday loans, and donating to charities that tie activity to its stolen SSN. While these modest deposits accumulate, the SuperSynthetic identity continues to consistently access its bank account (checking its balance, looking at statements, etc.).

The next generation of bots: SuperSynthetic identities

Eventually, the reputation of a “customer in good standing” is achieved. The identity’s creditworthiness score increases. A credit card or loan is extended. The fraudster starts warming up the getaway car.

Months of patience (12-18 months, on average) finally pays off when the bank deposits the loan or issues the credit card and the synthetic identity cashes out. It’s a systematic, slow-burn operation and it’s executed at scale. SuperSynthetic “sleeper” identities are actively preying on banks and finservs—by the thousands.

What now?

AI-powered synthetic and SuperSynthetic identities are wicked-smart, as are the criminal enterprises deploying them en masse. These aren’t black-hoodied EECS dropouts operating out of mom’s basement; the humans behind fake humans are well-funded and know who to target, namely smaller financial organizations like credit unions that lack the data and extensive fraud stacks and teams of a Bank of America or Chase.

Individuals aren’t safe either. Today’s fraudsters are leveraging social engineering and conversational AI tools such as ChatGPT to swindle regular Joes and Jans. Take “pig butchering” scams, for example. These drawn-out schemes start as a wrong number text message before, weeks or months later, recipients are tricked into making bogus crypto investments.

And if synthetic or SuperSynthetic identities require an extra layer of trickery, they can always count on deepfakes. Generative AI has elevated deepfakes to hyperrealistic proportions. To wit: a finance worker in Hong Kong wired $25M following a video call with a deepfaked CFO.

Creating a synthetic identity is easy. Stopping one is tough. But the latter isn’t a lost cause.

A hefty amount of real-time, multicontextual, activity-backed identity intelligence is just what the synthetic fraud doctor ordered. That’s part of the solution, at least. Banks also need to switch up the philosophical approach underpinning their security stacks. The optimal approach is a “top-down” strategy that analyzes synthetic identities collectively rather than individually.

Doing this preemptively—prior to account creation—detects signature online behaviors and patterns of synthetic identities that otherwise would get lost in the sauce. Coincidence is ruled out. Synthetics are singled out.

If banks and finservs have any chance of neutralizing the newest evolution of synthetic fraudsters, this is the ticket. But the clock is ticking. SuperSynthetic identities grow in strength and number by the day. Businesses may not feel the damage for weeks or even months, but that long dynamite fuse culminates in a big, and possibly irreversible, boom.

The war between democracy and generative AI rages on

2024 is a big year for democracy. Half of the global population resides in countries that have an election this year, and all eyes, of course, will be on the main event: the grudge match between Joe Biden and Donald Trump.

But running parallel to this frenzied election year is the rapid evolution of artificial intelligence and its ever-growing assortment of applications. Just when we think it can’t get any smarter or more believable, AI leapfrogs our expectations. It’s exciting for users looking to post amusing memes or videos in their Slack channels—not so much for politicians. Voters are frustrated, too. Is that audio clip of a candidate’s off-color remarks genuine, or deepfaked malarkey?

The deceptive tactics of generative AI-powered deepfakes and fake humans—and their potential to swing elections—aren’t just fodder for cybersecurity trades. Biden issued an executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” late last year. Several tech giants, including Google, Microsoft, and OpenAI, recently met at the Munich Security Conference and agreed to take “reasonable precautions” in stopping AI from influencing elections.

Executives from leading tech companies gather at the 2024 Munich Security Conference.

But Biden’s executive order doesn’t sufficiently address synthetic fraud, the Munich pact, according to critics, isn’t proactive enough, and fraudsters (especially those with AI at their disposal) are always a step ahead regardless of the countermeasures platforms or the government put in place. Furthermore, the tech companies tasked with hosting and moderating deepfaked content have laid off more than 40K workers. Without a new approach to neutralizing synthetic fraudsters, the fakery will continue to snowball.

Here are the ways in which generative AI is defrauding elections globally, and how a re-tooled approach may help social media and AI platforms fight back.

1-800-ROBO-CALL

Video deepfakes steal most of the headlines, but AI-generated audio is more advanced and democratized (at least until hyper-realistic video offerings like OpenAI’s Sora become widely available). One could even argue that deepfaked audio is more effective in altering elections, especially after a deepfaked Biden robocall tried to dissuade people from voting in the New Hampshire primary.

Context, or lack thereof, is what makes audio deepfakes tough to recognize. The voters on the other end of the line lack the visual indicators that give video deepfakes away.

This context deficit bolsters the believability of so-called “grandparent” scams as well, in which a fraudster clones the voice of someone who’s close to the victim and convinces them to wire money. Personalization brings credibility. Just as Cameo users can have celebrities record birthday wishes for a loved one, AI applied to voice or video patterns can now make any personality or politician appear to record a custom message.

If you’re in the business of artificially swaying voter sentiment and rigging elections, simply copy the voice of a relative or friend, spew some disinformation about Candidate XYZ or Prop ZYX, and move on to the next robocall.

In February, the FCC banned robocalls that use AI-generated voices. Time will tell if this puts audio deepfakers on hold. (Don’t count on it.)

A picture’s worth a thousand votes

AI image generators are also under the microscope. The Center for Countering Digital Hate, a watchdog group, found that tools like Midjourney and ChatGPT Plus can create deceptive images capable of spreading false political information.

The study, which additionally tested DreamStudio and Microsoft’s Image Creator, found the tools generated fake election imagery in more than 40% of test cases. Midjourney performed significantly worse, generating disinformation 65% of the time—not a huge surprise considering the company didn’t sign the Munich Security Conference pact and only employs 11 team members.

The realistic nature of these images is startling. In March, an AI-generated photo purporting to show black Trump supporters posing with the former president was exposed as fake, apparently created in an attempt to draw black voters away from the Biden campaign. Several AI-generated and equally bogus images of Trump being arrested also proliferated across social media.

AI-generated political images are incredibly lifelike.

Since the watchdog report, leading AI generators have put guardrails in place. The most obvious move is to disallow prompts involving “Biden” or “Trump.” However, jailbreaking maneuvers can sometimes bypass such controls. For example, instead of typing a candidate’s name, bad actors can key in their defining physical characteristics along with, say, “45th president,” and produce the desired image.

Take political candidates out of the equation. There are still other visuals that can sway voters. How about a fake image of a Trump supporter smashing a ballot box open, or Biden supporters setting Mar-a-Lago ablaze? Election tampering campaigns don’t always target a specific candidate or political party but rather a divisive issue such as freedom of choice or border control. For instance, images of migrants illegally crossing the Rio Grande or climbing a fence, fake or not, are bound to rile up one group of voters.

A global crisis

International examples of AI-based election interference could portend trouble for the US, but hopefully will inspire technologists and government officials to rethink their cybersecurity approach.

In Slovakia, a key election was tainted by AI-generated audio that mimicked a candidate’s voice saying he had tampered with the election and, worse (for some voters), planned to raise beer prices. Indonesian Gen-Z voters warmed up to a presidential candidate and previously disgraced military general thanks to a cat-loving, “chubby-cheeked” AI-generated image of him. Bad actors in India, meanwhile, are using AI to “resurrect” dead political figures who in turn express their support for those currently in office.

An AI-generated avatar of M Karunanidhi, the deceased leader of India’s DMK party.

The image of the Indonesian presidential candidate is nothing more than a harmless campaign tactic, but are the other two examples the work of election-hacking-as-a-service schemes? Troubling as the term might be, this is our new democratic reality: hackers contracted to unleash hordes of synthetic identities across social media, spreading false, AI-generated content to influence voter sentiment however they please.

An Israeli election hacking group dubbed “Team Jorge,” which controls over 30K fake social media profiles, meddled in a whopping 33 elections, according to a Guardian report. If similar groups aren’t already threatening elections in the US, they will soon.

The road ahead

Combatting AI-powered election fraud is an uphill battle, and Midjourney CEO David Holz believes the worst is yet to come. “Anybody who’s scared about fake images in 2024 is going to have a hard 2028,” Holz warned during a recent video presentation. “It will be a very different world at that point…Obviously you’re still going to have humans running for president in 2028, but they won’t be purely human anymore.”

What is the answer to this problem, this future Holz sees in which every political candidate has a lifelike “deepfake chatbot” armed with manufactured talking points? Raising public awareness of generative AI’s role in election tampering is important but, ironically, that can also backfire. As more people learn about the complexity and prevalence of deepfaked audio, video, and images, a growing sense of skepticism can hinder their judgment. Known as the “liar’s dividend” in political circles, this effect causes jaded, deepfake-conscious voters to mislabel genuine media as fake. It doesn’t help matters when presidential candidates label mainstream media similarly while publicizing their own view of the world.

Social media and generative AI platforms have their work cut out for them. Neutralizing, much less curbing, AI-powered election fraud pits them against artificial intelligence and synthetic identities that are disturbingly lifelike and nearly undetectable. This includes SuperSynthetic™ “sleeper” identities that can hack elections just as easily as they swindle finservs.

Deepfaked synthetic identities are too smart and real-looking to face head-on. Stopping these slithery fraudsters requires an equally crafty strategy, and a sizable chunk of real-time, multicontextual, activity-backed identity intelligence. Our money is on a “top-down” approach that, prior to account creation, analyzes synthetic identities collectively rather than individually. This bird’s eye view picks up on signature online behaviors of synthetic identities, patterns that rule out coincidence.

The Deduce Identity Graph is monitoring upwards of 30 million synthetic identities in the US alone. Some of these identities will attempt to “hack the vote” come November. Some already are. A high-level approach that examines them as a group—before they can deepfake unsuspecting voters—may be democracy’s best shot.

Celebrities, politicians, and banks face a deepfake dilemma

We’re reaching the “so easy, a caveman can do it” stage of the deepfake epidemic. Fraudsters don’t need a computer science degree to create and deploy armies of fake humans, nor will it drain their checking account (quite the opposite).

As if deepfake technology wasn’t accessible enough, the recent unveiling of OpenAI’s Sora product only simplifies—and complicates—matters. Sora, which for now is only available to certain users, produces photorealistic video scenes from text prompts. Not to be outdone, Alibaba demonstrated their EMO product making the Sora character sing. The lifelike videos created by such deepfake platforms fool even the ritziest of liveness detection solutions.

AI-powered fraud isn’t flying under the radar anymore—the prospect of taxpayers losing upwards of one trillion dollars will do that. One burgeoning scam, known as pig butchering, was featured on an episode of Last Week Tonight with John Oliver. These scams start as a wrong number text message and, over the course of weeks or months, lure recipients into bogus crypto investments. Conversational generative AI tools like ChatGPT, combined with clever social engineering, make pig butchering a persuasive and scalable threat. Accompanying these texts with realistic deepfaked images only bolsters the perceived authenticity.

Companies are taking notice, too. So is the Biden administration, though its executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” in late 2023 didn’t sufficiently address synthetic fraud—specifically cases involving Generative AI and deepfakes.

The damage caused by AI-generated, deepfaked identities continues to worsen. Here is how it has permeated seemingly every facet of our lives, and how banks can stay one step ahead.

Hacking the vote

The 2024 presidential election is shaping up to be quite the spectacle, one that will capture the eyes of the world and, in all likelihood, further sever an already divided populace. Citizens exercising their right to vote is crucial, but the advancement of deepfake technology raises another concern: are voters properly informed?

Election-hacking-as-a-service sounds like the work of dystopian fiction, but it’s just the latest threat politicians and their constituents need to worry about. Highly sophisticated factions—in the US and abroad—are leveraging generative AI and deepfakes to weaponize disinformation and flip elections like flapjacks.

Some election meddlers claim to have swung the outcome of 30+ elections. Remember the deepfaked Biden robocall ahead of the New Hampshire primary? That’s the handiwork of an election hacking superteam. A personalized text message or email might not be from [insert candidate here]. A video portraying an indecent remark could be fabricated. Some voters may say they’re “leaning” towards voting yay or nay on Measure Y or Prop Z, when in actuality they’re being pushed in either direction by synthetic election swingers.

In February, a slew of tech behemoths signed an accord to fight back against AI-generated election hacking. Like Biden’s executive order, the accord is a step in the right direction; time will tell if it pays dividends.

The case of the deepfaked CFO

Deepfaked audio and video is convincing enough to sway voters. It can also dupe multinational financial firms out of $25 million—overnight.

Just ask the Hong Kong finance worker who unknowingly wired about $25.6 million to fraudsters after attending a video conference call with people he believed were his colleagues. A synthetic identity posing as the company’s CFO authorized the transactions—15 total deposits into five accounts—which the worker discovered were fraudulent after checking in with his corporate office.

It appears the bad actors used footage of past video conferences to create the deepfaked identities. Data from WhatsApp and emails helped make the identities look more legitimate, which shows the lengths these deepfaking fraudsters are willing to go to.

A couple of years ago, fraudsters would have perpetrated this attack in a simpler fashion, via phishing, for example. But with the promise of bigger paydays, and much less effort and technical knowhow required thanks to the ongoing AI explosion, cyber thieves have every incentive to deepfake companies all the way to the bank.

The Taylor Swift incident

Celebrities, too, are getting a taste of just how destructive deepfakes can be.

Perhaps the most notable (and widely covered) celebrity deepfake incident happened in January when sexually explicit, AI-generated pictures of Taylor Swift popped up on social media. Admins on X/Twitter, where the deepfaked images spread like wildfire, eventually blocked searches for the images but not before they garnered nearly 50 million views.

Pornographic celebrity deepfakes aren’t a new phenomenon. As early as 2017, Reddit users were superimposing the faces of popular actresses—such as Scarlett Johansson and Gal Gadot—onto porn performers. But AI technology back then was nowhere near where it is today. Discerning users could spot a poorly rendered face-swap and determine a video or image was fake.

Shortly after the Taylor Swift fiasco, US senators proposed a bill that enables victims of AI-generated deepfakes to sue the videos’ creators—long overdue considering a 2019 report found that non-consensual porn comprised 96 percent of all deepfake videos.

Deepfaking the finservs

Whether it’s hacking elections, spreading pornographic celebrity deepfakes, or posing as a company’s CFO, deepfakes have never been more convincing or dangerous. And, because fraudsters want the most bang for their buck, naturally they’re inclined to attack those with the most bucks: banks, fintech companies, and other financial institutions.

The $25 million CFO deepfake speaks to just how severe these cases can be for finservs, though most deepfaking fraudsters prefer a measured approach that spans weeks or months. Such is the M.O. of  SuperSynthetic™ “sleeper” identities. This newest species of synthetic fraudster is too crafty to settle for a brute-force offensive. Instead, it leverages an aged and geo-located identity that’s intelligent enough to make occasional deposits and interact with a banking website or app for an extended period to appear like a genuine customer.

However, SuperSynthetics achieving their long-awaited goal—accepting a credit card or loan offer, cashing out, and scramming—is contingent on one vital step: passing the onboarding process.

This is where deepfakes come in. During onboarding, SuperSynthetics can deepfake driver’s licenses and other forms of ID, even live video interviews if need be. Given the advancement in deepfake technology, and the unreliability of liveness detection, the only real chance banks have is to stop SuperSynthetic identities before they’re onboarded.

Using a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence, preemptively sniffing out SuperSynthetics is indeed possible. This is the foundation of a “top-down” approach that analyzes synthetic identities collectively—different from the one-by-one approach of the olden days. A bird’s eye view of identities uncovers signature online behaviors and patterns consistent enough to rule out a false positive. Multiple identities depositing money into their checking account every Wednesday at 9:27 p.m.? Something’s afoot.

The top-down approach is the surest and fastest way banks can ferret out synthetic identities and avoid getting deepfaked at the onboarding stage. But the clock is ticking. A study commissioned by Deduce found more than 75% of finservs already had synthetic identities in their databases, and 87% had extended credit to fake accounts.

Bank vs. Deepfake clearly isn’t a fair fight. But if banks do their work early, and subsequently avoid deepfakes altogether, their customers, reputations, and bottom lines will be the better for it.

Get ahead, or get left behind

New technology gets the people going. Just ask the folks coughing up a fair sum of cash for an Apple Vision Pro. Sure, these users may look like Splinter Cell operatives with their VR goggles on but, most likely, Apple’s foray into “spatial computing” will take off sooner rather than later.

However, before everyday users and even large enterprises can adopt new technologies, another category of users is way ahead of them: fraudsters. These proactive miscreants adopt the latest tech and find new ways to victimize companies and their customers. Think metaverse and crypto fraud or, most recently, the use of generative AI to create legions of humanlike bots.

Look back through the decades and a clear pattern emerges: new tech = new threat. Phishing, for example, was the offspring of instant messaging and email in the mid-1990s. Even the “advance fee” or “Nigerian Prince” scam we associate with our spam folders originally cropped up in the 1920s due to breakthroughs in physical mail.

What can we learn from studying this troubling pattern? How can businesses adopt the latest wave of nascent technologies while protecting themselves from opportunistic fraudsters? In answering these questions, it’s helpful to examine the major technological advancements of the past 20+ years—and how bad actors capitalized at every step along the way.

The 2000s

The 2000s ushered in digital identities and, by extension, digital identity fraud.

Web 1.0 and the internet had exploded by the early aughts. PCs, e-commerce, and online banking increased the personal data available on the web. As more banks moved online and digital-only banks emerged, fintech companies like PayPal hit the ground running and online transactions skyrocketed. Fraudsters pounced on the opportunity. Phishing, Trojan horse viruses, credential stuffing, and exploiting weak passwords were among the many tricks that fooled users and led to breaches at notable companies and financial institutions.

An example of a Nigerian Prince or “419” email scam

Phishing scams, in which bogus yet legitimate-looking emails persuade users to click a link and input personal info, took off in the 2000s and are even more effective today. Thanks to AI, including AI-based tools like ChatGPT, phishing emails are remarkably sophisticated, targeted, and scalable.

Social media entered the frame in the 2000s, too, which opened a Pandora’s box of online fraud schemes that still persist today. The use of fake profiles provided another avenue for phishing and social engineering that would only widen with the advent of smartphones.

The 2010s

The 2010s were all about the cloud. Companies went gaga over low-cost computing and storage solutions, only to go bonkers (or broke) due to the corresponding rise in bot threats.

By the start of the decade, Google, Microsoft, and AWS were all-in on the cloud. The latter brought serverless computing to the forefront at the 2014 re:Invent conference, and the two other big-tech powerhouses followed suit. Then came the container renaissance: the release of Docker and Kubernetes, the mass adoption of DevOps, hybrid and multicloud, and so on. But, in addition to their improved portability and faster deployment, containers afforded bad actors (and their bots) another attack surface.

AWS unveils Lambda (and serverless computing) at re:Invent 2014

The rise of containers, cloud-native services, and other cloudy tech in the 2010s led to a boom in innovation, efficiency, and affordability for enterprises—and for fraudsters. Notably, the Mirai botnet tormented global cloud services companies using unprecedented DDoS (distributed denial of service) attacks, and the 3ve botnet accrued $30 million in click-fraud over a five-year span.

Malicious bots had never been cheaper or more scalable, and brute force and credential stuffing attacks had never been more seamless or profitable. The next tech breakthrough would catapult bots to another level of deception.

The 2020s

AI has blossomed in the 2020s, especially over the past year, and once again fraudsters have flipped the latest technological craze into a cash cow.

Amid the ongoing AI explosion, bad actors have specifically leveraged Generative AI and self-learning identity personalization to line their pockets. It’s hard to say what’s scarier—how human these bots appear, or how easy it is for novice users to create them. The widespread availability of data and AI’s capacity to teach itself using LLMs (large language models) have spawned humanlike identities at massive scale. Less technical fraudsters can easily build and deploy these identities thanks to tools like WormGPT, otherwise known as “ChatGPT’s malicious cousin.”

SuperSynthetic identities represent the next step in bot evolution

The most nefarious offshoot of AI’s golden age may be SuperSynthetic™ identities. The most humanlike of the synthetic fraud family tree, SuperSynthetics are all about the long con and don’t mind waiting several months to cash out. These identities, which can deepfake their way past account verification if need be, are realistically aged and geo-located with a legit credit history to boot, and they’ll patiently perform the online banking actions of a typical human to build trust and creditworthiness. Once that loan is offered, the SuperSynthetic lands its long-awaited reward. Then it’s on to the next bank.

Like Web 1.0 and cloud computing before it, AI’s superpowers have amplified the capabilities of both companies and the fraudsters who threaten their users, bottom lines and, in some cases, their very existence. This time around, however, the threat is smarter, more lifelike, and much harder to stop.

What now?

There’s undoubtedly a positive correlation between the emergence of technological trends and the growth of digital identity fraud. If a new technology hits the scene, fraudsters will exploit it before companies know what hit them.

Rather than getting ahead of the latest threats, many businesses are employing outdated mitigation strategies that woefully overlook the SuperSynthetic and stolen identities harming their pocketbooks, users, and reputations. Traditional fraud prevention tools scrutinize identities individually, prioritizing static data such as device, email, IP address, SSN, and other PII. The real solution is to analyze identities collectively and track dynamic activity data over time. This top-down strategy, with a sizable source of real-time, multicontextual identity intelligence behind it, is the best defense against digital identity fraud’s most recent evolutionary phase.

It’s not that preexisting tools in security stacks aren’t needed; it’s that these tools need help. At last count, the Deduce Identity Graph is tracking nearly 28 million synthetic identities in the US alone, including nearly 830K SuperSynthetic identities (a 10% increase from Q3 2023). If incumbent antifraud systems aren’t fortified, and companies continue to look at identities on a one-to-one basis, AI-generated bots will keep slipping through the cracks.

New threats require new thinking. Twenty years ago phishing scams topped the fraudulent food chain. In 2024 AI-generated bots rule the roost. The ultimatum for businesses remains the same: get ahead, or get left behind.

Synthetic customers are there, even if you don’t see them

There’s no denying that customer data platforms (CDPs) are a must-have tool for today’s companies. Consolidating customer data into one location is much more manageable. Aside from data privacy considerations—particularly in finance and healthcare—a CDP’s organized, streamlined view of customer data activates personalized user experiences and offers for existing customers while accurately identifying prospective customers who are most likely to drive revenue.

But synthetic fraud, which now accounts for 85% of all identity fraud, is infesting the tidiest and most closely monitored of CDPs. Most CDPs scan for telltale signs of fraud in real-time; however, synthetic fraudsters are too smart for that. The ubiquity of AI, and its ever-growing intelligence, enables bad actors to create and manipulate synthetic identities that appear more human than ever. The signs of fraud aren’t so obvious anymore, and the cybersecurity tools used by many companies aren’t up to snuff.

Effectively stomping out synthetic identity fraud requires an obsessive degree of CDP hygiene. This, of course, isn’t possible without a thorough understanding of what synthetic identities are capable of, how they operate, and the strategy companies must adopt to neutralize them.

Silent killers

No intelligence agency wants to readily admit it’s been infiltrated by a spy, and no CEO is exactly chomping at the bit to admit their company’s customer database is crawling with fake customers. When PayPal’s then-CEO, Dan Schulman, admitted to over 4 million fake customers, it cost the fintech company more than 25% of its market capitalization. But these fraudsters are indeed there, camped out in CDPs and operating like legitimate customers—deposits, withdrawals, credit services, the whole nine.

A recent Wakefield report surveyed 500 senior fraud and risk professionals from the US. More than 75% of these executives said they had synthetic customers. Half of respondents deemed their company’s synthetic fraud prevention efforts somewhat effective, at best.

Perhaps most troubling? 87% of these companies admitted to extending credit to synthetic customers, and 53% of the time credit was extended proactively, via a marketing campaign, to the fraudster. These fraudsters aren’t just incredibly human-like and patient—they’re in it for the big haul. And according to the FTC’s 2022 report on identity fraud, the per-incident financial impact is in excess of $15K.

Synthetic Sleeper identities, as we call them, can remain in CDPs for months, in some cases over a year. They deposit small amounts of money here and there while interacting with the website or mobile app like a real customer would. Once their creditworthiness gets a bump, and they qualify for a loan or line of credit, payday is imminent. The fraudster performs a “bust-out,” or “hit-and-run.” The money is spent, and the bank is left with uncollectible debt.

This is not your grandmother’s synthetic identity. Such intelligence and cunning are the handiwork of synthetic fraud’s latest iteration: the SuperSynthetic™ identity.

SuperSynthetic, super slippery

How are synthetic fraudsters turning CDPs into their own personal clubhouses? Look no further than SuperSynthetic identities. The malevolent offspring of the ongoing generative AI explosion, SuperSynthetics are growing exponentially. In Deduce’s most recent Index, 828,095 SuperSynthetic identities are being tracked in the identity graph. These are hitting companies, especially banks, with costly smash-and-grabs at an unprecedented rate.

SuperSynthetics aren’t high on style points, but why opt for a brute force approach if you don’t need to? These methodical fraudsters are more than content playing the long game. Covering all of their bases allows for such patience—their credit history is legit; their identity is realistically aged and geo-located; and, for good measure, they can deepfake their way past selfie, video, or document verification.

Even the sharpest of real-time fraud detection solutions are unlikely to catch a SuperSynthetic. The usual hallmarks—an IP address or credit card being used for multiple accounts, behavioral changes over time—aren’t present. A SuperSynthetic is far too pedestrian to raise eyebrows, depositing meager dollar amounts over several months, regularly checking its account balance, paying bills and otherwise transacting innocuously until, finally, its reputation earns a credit card or loan offer.

Once the loan is transferred, or the credit card is acquired, it’s sayonara. The identity cashes out and moves on to the next bank. After all, the fraudster does not care about their credit score for that identity, one of dozens or hundreds they are manipulating. It has done its job and will be sacrificed for a highly profitable return.

Fake identities, real problems

Deduce estimates that 3-5% of financial services and fintech new accounts onboarded within the past two years are SuperSynthetic identities. Failing to detect these sleeper identities in a CDP hurts companies in a multitude of ways, all of which tie back to the bottom line.

Per the Wakefield report, 20% of senior US fraud and risk execs say synthetic fraud costs them between $50K and $100K per incident; 23% put the number at $100K+. Even the low end of that range, a whopping $50K per incident, should be alarming enough to prioritize preemptive countermeasures against CDP infiltration.

Another downside of synthetic infiltration is algorithm poisoning. Since the data for synthetic “customers” is inherently fake, this skews the models that drive credit decisioning. Risky applicants can be mistakenly offered loans, or vice versa. For banks, financial losses from algorithm poisoning are two-fold: erroneously extending credit to fake or unworthy customers; and bungling opportunities to extend credit to the right customers.
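To make that two-fold cost concrete, here’s a toy calculation (the numbers are invented for illustration and aren’t drawn from the Wakefield report or Deduce data) showing how synthetic “perfect payers” in the training data drag the measured default rate down, which in turn makes a credit model systematically underestimate risk:

```python
# Toy illustration of algorithm poisoning; all figures are invented.
# Each record: (is_synthetic, defaulted)
training_data = (
    [(False, True)] * 80      # 80 real customers who defaulted
    + [(False, False)] * 920  # 920 real customers who repaid
    + [(True, False)] * 300   # 300 synthetic "customers" with spotless histories
)

def default_rate(records):
    return sum(1 for _, defaulted in records if defaulted) / len(records)

real_only = [r for r in training_data if not r[0]]

print(f"True default rate (real customers only): {default_rate(real_only):.1%}")        # 8.0%
print(f"Poisoned default rate (synthetics included): {default_rate(training_data):.1%}")  # 6.2%
```

A model calibrated on the poisoned 6.2% figure prices risk too cheaply and over-approves; once the bust-outs hit and thresholds are tightened in response, legitimate borderline applicants get turned away. Both halves of the loss trace back to the fake rows in the data.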

A signature approach

The good news for financial services organizations (and their CDPs) is the battle against synthetic, and even SuperSynthetic, identities is not a futile one. The same strategy that’s effective in singling out synthetic identities pre-NAO (New Account Opening) can help spot synthetics that have already breached CDPs.

Even if a SuperSynthetic has already bypassed fraud detection at the account opening stage, gathering identity activity from before, during, and after the NAO workflow and analyzing identities collectively, rather than one-by-one, unearths SuperSynthetic behavioral patterns.

Traditional fraud prevention tools take an individualistic approach, doubling down on static data such as device, email, and IP address for singular identities. But catching synthetic fraudsters, pre- or post-NAO, calls for tracking dynamic activity data over time. At a high level (literally), this translates to a top-down, or “bird’s-eye,” strategy—powered by an enormous and scalable source of real-time, multicontextual identity intelligence—that verifies identities as a group or signature. Any other plan of attack is unlikely to pick up the synthetic scent.

Per the slide above, a unique activity-backed data set augments the data from a CDP and fraud platform to ferret out synthetic accounts. To catch these slithery fraudsters, more data can and should be deployed. Knowing how an identity behaved online prior to becoming a customer bolsters the data science models used to give CDPs a synthetic spring cleaning.
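As a minimal sketch of what that augmentation could look like (the record layouts and the “thin history” rule below are hypothetical, not a spec of Deduce’s or any CDP vendor’s schema), pre-NAO activity is joined onto CDP customer records so downstream models can score identities on behavior that predates the banking relationship:

```python
# Hypothetical record layouts; the join-and-enrich step is the point, not the fields.
cdp_customers = [
    {"customer_id": "c1", "email": "a@example.com", "tenure_days": 210},
    {"customer_id": "c2", "email": "b@example.com", "tenure_days": 45},
]

pre_nao_activity = {
    # email -> activity observed before the account was opened
    "a@example.com": {"sites_seen_on": 14, "first_seen_days_ago": 900},
    "b@example.com": {"sites_seen_on": 1, "first_seen_days_ago": 46},
}

def augment(customers, activity):
    """Attach pre-NAO activity features to each CDP record. An identity with
    little or no observable online history before its account was opened is a
    candidate for closer review."""
    enriched = []
    for c in customers:
        pre = activity.get(c["email"], {"sites_seen_on": 0, "first_seen_days_ago": 0})
        thin_history = (pre["sites_seen_on"] == 0
                        or pre["first_seen_days_ago"] <= c["tenure_days"] + 7)
        enriched.append({**c, **pre, "thin_history": thin_history})
    return enriched

for row in augment(cdp_customers, pre_nao_activity):
    print(row)
```

Customer c2, whose online footprint only appears right around the time the account was opened, gets flagged for review, while the long-established c1 sails through.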

What does this look like in practice? Say a real-time scan of in-app customer activity reveals, over an extended period, that multiple identities check their account balance every Thursday at exactly 8:17 a.m. Patterns such as this rule out coincidence and uncover the otherwise clandestine footprints of SuperSynthetic identities.

The intelligence and elusiveness of SuperSynthetics are increasing at a breakneck pace. In addition to terrorizing CDPs, SuperSynthetics have the potential to peddle sports betting accounts, carry out financial aid scams, and even swing the stock market via disinformation campaigns. Given what’s at stake, companies that don’t combat SuperSynthetics with a thorough, activity-driven approach might be in for serious trouble in the year ahead.

College students are lifelong learners. So are AI-powered fraudsters.

With each passing day AI grows more powerful and more accessible. This gives fraudsters the upper hand, at least for now, as they roll out legions of AI-powered fake humans that even governmental countermeasures—such as the Biden administration’s recent executive order—will be lucky to slow down.

Among other nefarious activities, bad actors are leveraging AI to peddle synthetic bank and online sports betting accounts, swing elections, and spread disinformation. They’re also fooling banks with another clever gimmick: posing as college freshmen.

College students, particularly underclassmen, have long been a target demographic for banks. Fraudsters are well aware and know that banks’ yearning for customer acquisition, coupled with their inadequate fraud prevention tools, presents an easy cash-grab opportunity (and, perhaps, a chance to revisit their collegiate years).

Early bank gets the bullion

The appeal of a new college student from a customer acquisition perspective can’t be overstated.

A young, impressionable kid is striking out on their own for the first time. They need a credit card to pay for both necessary and unnecessary things (mostly the latter). They need a bank. And their relationship with that bank? There’s a good chance it will outlast most of their romantic relationships.

This could be their bank through college, through their working years, the bank they procure a loan from for their first house, the bank they encourage their kids and grandkids to bank with. In a college freshman, banks don’t just land one client, but potentially an entire generation of clients. Lifetime value up the wazoo.

Go to any college move-in day and you’ll spot bank employees at tables, using giveaway gimmicks to attract students to open up new accounts. According to the Consumer Financial Protection Bureau, 40% of students attend a college that’s contractually linked to a specific bank. However, as banks shovel out millions so they can market their products at universities, a fleet of synthetic college freshmen lie in wait, with the potential to collectively steal millions of their own.

Playing the part

Today’s fraudsters are master identity-stealers who can dress up synthetic identities to match any persona.

In the case of a fake college freshman, building the profile starts off in familiar fashion: snagging a dormant social security number (SSN) that’s never been used or hasn’t been used in a while. Like many forms of Personally Identifiable Information (PII), stolen SSNs from infants or deceased individuals are readily available on the dark web.

From here, fraudsters can string together a combination of stolen and made-up PII to create a synthetic college freshman identity that qualifies for a student credit card. No branch visit necessary, and IDs can be deepfaked. The synthetic identity makes small purchases and pays them off on time—food, textbooks, phone bill—building trust with the bank and improving their already respectable credit score of around 700. They might sign up for an alumni organization and/or apply for a Pell Grant to further solidify their collegiate status.

Pell Grants, of course, require admission to a college—a process that, similar to acquiring a credit card from a bank, is easy pickings for synthetic fraudsters.

The ghost student epidemic

Any bank that doesn’t take the synthetic college freshman use case seriously should study the so-called “ghost student” phenomenon: fake college enrollees that rob universities of millions. 

In California alone, these synthetic students, who employ the same playbook as bank-swindling synthetics, account for 20% of community college applications (more than 460K). Thanks to an increased adoption of online enrollment and learning post-pandemic, relaxed verification protocols for household income, and the proliferation of AI-powered fake identities, ghost students can easily grab federal aid and never have to attend class.

Like ghost students, synthetic college freshmen can apply for a credit card without ever stepping foot inside a bank branch. Online identity verification is a breeze for the seasoned bad actor. Given the democratization of powerful generative AI tools, ID cards and even live video interviews over Zoom or another video client can be deepfaked.

A (SuperSynthetic) tale as old as time

Both the fake freshmen and ghost student problems are symptomatic of a larger issue: SuperSynthetic™ identities.

SuperSynthetic bots are the most sophisticated yet. Forget the brute force attacks of yore; SuperSynthetics are incredibly lifelike and patient. These identities play nice for several months or even years, building trust by paying off credit card transactions on time and otherwise interacting like a real human would. But, once the bank offers a loan and a big payday is in sight, that SuperSynthetic is out the door.

An unorthodox threat like a SuperSynthetic identity can’t be thwarted by traditional fraud prevention tools. Solutions reliant on individualistic, static data won’t cut it. Instead, banks (and universities, in the case of ghost students) need a solution powered by scalable and dynamic real-time data. The latter approach verifies identities as a group or signature: the only way to pick up on the digital footprints left behind by SuperSynthetics.

As human as SuperSynthetic identities are, they aren’t infallible. With a bird’s-eye view of identities, patterns of activity—such as SuperSynthetics commenting on the same website at the exact same time every week over an extended period—quickly emerge.

Fake college students are one of the many SuperSynthetic personas capable of tormenting banks. But it isn’t the uphill battle it appears to be. If banks change their fraud prevention philosophy and adopt a dynamic, bird’s-eye approach, they can school SuperSynthetics at their own game.