Synthetic fraud remains the elephant in the room

The Biden administration’s recent executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” naturally caused quite a stir among the AI talking heads. The security community also joined the dialog and expressed varying degrees of confidence in the executive order’s ability to protect the federal government and private sector against bad actors.

Clearly, any significant effort to enforce responsible and ethical AI use is a step in the right direction, but this executive order isn’t without its shortcomings. Most notable is its inadequate plan of attack against synthetic fraudsters—specifically those created by Generative AI.

With online fraud reaching a record $3.56 billion through the first half of 2022 alone, financial institutions are an obvious target of AI-based synthetic identities. A Wakefield report commissioned by Deduce found that 76% of US banks have synthetic accounts in their database, and a whopping 86% have extended credit to synthetic “customers.”

However, the shortsightedness of the executive order also carries with it a number of social and political ramifications that stretch far beyond dollars and cents.

Missing the (water)mark

A key element of Biden’s executive order is the implementation of a watermarking system to differentiate between content created by humans and AI, a topical development in the wake of the SAG-AFTRA strike and the broader artist-versus-AI clash. Establishing an object’s provenance via a digital watermark or signature would seem a sensible enough way to identify AI-generated content and synthetic fraud, that is, if all of the watermarking mechanisms currently at our disposal weren’t utterly unreliable.

A University of Maryland professor, Soheil Feizi, along with researchers at Carnegie Mellon and UC Santa Barbara, showed how easily watermark verification can be circumvented: they fooled detectors by adding fake watermarks to images, and they were able to remove genuine watermarks just as easily.

It’s also worth noting that the watermarking methods laid out in the executive order were developed by big tech. This raises concerns around a walled-garden effect in which these companies are essentially regulating themselves while smaller companies follow their own set of rules. And don’t forget about the fraudsters and hackers who, of course, will gladly continue using unregulated tools to commit AI-powered synthetic fraud, as well as overseas bad actors who are outside US jurisdiction and thus harder to prosecute.

The deepfake dilemma

Another element of many synthetic fraud attacks, deepfake technology, is addressed in the executive order, but a clear-cut solution isn’t proposed. Deepfaking is as complex and democratized as ever—and will only grow more so in the coming years—yet the executive order falls short of recommending a plan to continually evolve and keep pace.

Facial recognition verification is employed at the government and state level, but even novice bad actors can use AI to deepfake their way past these tools. Today, anyone can deepfake an image or video with a few taps. Apps like FakeApp can seamlessly integrate someone’s face into an existing video, or generate an entirely new one. As little as a cropped face from a social media image can spawn a speaking, blinking, head-moving entity. Uploaded selfies and live video calls pass with flying colors.

In this era of remote customer onboarding, coinciding with unprecedented access to deepfake tools, it behooves executive orders and other legislation to offer a more concrete solution to deepfakes. Finservs (financial services) companies are in the crosshairs, but so are social media platforms and their users; the latter poses its own litany of dangers.

Synthetic fraud: multitudes of mayhem

The executive order’s watermarking notion and insufficient response to deepfakes don’t squelch the multibillion-dollar synthetic fraud problem.

Synthetic fraudsters still have the upper hand. With Generative AI at their disposal, they can create patient and incredibly lifelike SuperSynthetic™ identities that are extremely difficult to intercept. Worse, “fraud-as-a-service” organizations peddle synthetic mule accounts from major banks, and also sell synthetic accounts on popular sports betting sites—new, aged, geo-located—for as little as $260.

More worrisome, amid the rampant spread of disinformation online, is the potential for synthetic accounts to cause social panic and political upheaval.

Many users struggle to identify AI-generated content on X (formerly Twitter), much less on other platforms, and social networks’ practice of charging a nominal fee to “verify” an account offers synthetic identities a cheap way to appear even more authentic. All it takes is one post shared hundreds of thousands or millions of times for users to mobilize against a person, nation, or ideology. A single doctored image or video could spook investors, incite a riot, or swing an election.

“Election-hacking-as-a-service” is indeed another frightening offshoot of synthetic fraud, to the chagrin of politicians (at least those on the wrong side of it). These fraudsters weaponize their armies of AI-generated social media profiles to sway voters. One outfit in the Middle East interfered in more than 33 elections.

Banks or betting sites, social uprisings or rigged elections: unchecked synthetic fraud, buttressed by AI, will continue to wreak havoc in multitudinous ways unless it is combated by an equally intelligent and scalable approach.

The best defense is a good offense

The executive order, albeit an encouraging sign of progress, is too vague in its plan for stopping AI-generated content, deepfakes, and the larger synthetic fraud problem. The programs and tools it says will find and fix security vulnerabilities aren’t clearly identified. What do these look like? How are they better than what’s currently available?

AI-powered threats grow smarter by the second. Verbiage like “advanced cybersecurity program” doesn’t say much; will these fraud prevention tools be continually developed so they’re in lockstep with evolving AI threats? To its credit, the executive order does mention worldwide collaboration in the form of “multilateral and multi-stakeholder engagements,” an important call-out given the global nature of synthetic fraud.

Aside from an international team effort, the overarching and perhaps most vital key to stopping synthetic fraud is an aggressive, proactive philosophy. Stopping AI-generated synthetic and SuperSynthetic identities requires a preemptive, not reactionary, approach. We shouldn’t wait for authenticated—or falsely authenticated—content and identities to show up, but rather stop synthetic fraud well before infiltration can occur. And, given the prevalence of synthetic identities, they should have a watermark all their own.

76% of finservs are victims of synthetic fraud

In 1938, Orson Welles’ infamous radio broadcast of The War of the Worlds convinced thousands of Americans to flee their homes for fear of an alien invasion. More than 80 years later, the public is no less gullible, and technology unfathomable to people living in the 1930s allows fake humans to spread false information, bamboozle banks, and otherwise raise hell with little to no effort.

These fake humans, also known as synthetic identities, are ruining society in myriad ways: tampering with election polls and census data, disseminating misleading social media posts with real-world consequences, and sharing fake articles on Reddit that subsequently skew the Large Language Models driving platforms such as ChatGPT. And, of course, bad actors can leverage fake identities to steal millions from financial institutions.

The bottom line is this: synthetic fraud is prevalent; financial services companies (finservs), social media platforms, and many other organizations are struggling to keep pace; and the impact, both now and in the future, is frighteningly palpable.

Here is a closer look at how AI-powered synthetic fraud is infiltrating multiple facets of our lives.

Accounts for sale

If you need a new bank account, you’re in luck: obtaining one is as easy as buying a pair of jeans and, in all likelihood, just as cheap.

David Maimon, a criminologist and Georgia State University professor, recently shared a video from Mega Darknet Market, one of the many cybercrime syndicates slinging bank accounts like Girl Scout Cookies. Mega Darknet and similar “fraud-as-a-service” organizations peddle mule accounts from major bank brands (in this case Chase) that were created using synthetic identity fraud, in which scammers combine stolen Personally Identifiable Information (PII) with made-up credentials.

But these cybercrime outfits take it a step further. With Generative AI at their disposal, they can create SuperSynthetic™ identities that are incredibly patient, lifelike, and difficult to catch.

Aside from bank accounts, fraudsters are selling accounts on popular sports betting sites. The verified accounts—complete with name, DOB, address, and SSN—can be new or aged and even geo-located, with a two-year-old account costing as little as $260. Perfect for money launderers looking to wash stolen cash.

Fraudsters are selling fraudulent bank accounts as well as accounts on sports betting sites.

Cyber gangs like Mega Darknet also offer access to the very Generative AI tools they use to create synthetic accounts. This includes deepfake technology which, besides fintech fraud, can help carry out “sextortion” schemes.

X-cruciatingly false

Anyone who’s followed the misadventures of X (formerly Twitter) over the past year, or used any social media since the late 2010s, knows that Elon’s embattled platform is a breeding ground for bots and misinformation. Generative AI only exacerbates the problem.

A recent study found that X users couldn’t distinguish AI-generated content (GPT-3) from human-generated content. Most alarming is that these same users trusted AI-generated posts more than posts from real humans.

In the US (where 20% of the population famously can’t locate the country on a world map) and elsewhere, these synthetic accounts and their large-scale misinformation campaigns pose myriad risks, especially if said accounts are “verified.” It wouldn’t take much to incite a riot, or to stoke anger and subsequent violence toward a specific group of people. How about sharing a bogus picture of an exploded Pentagon that impacts the stock market? Yep. That, too.

This fake image of an explosion near the Pentagon exemplifies the danger of synthetic accounts spreading misinformation.


Few topics are more timely, or rile up users more, than election interference, another byproduct of the fake-human (and fake social media) epidemic. Indeed, spreading false information in service of a particular political candidate or party existed well before social media, but the stakes have since increased exponentially.

If fraud-as-a-service isn’t ominous-sounding enough, election-hacking-as-a-service might do the trick. Groups with access to armies of fake social media profiles are weaponizing disinformation to sway elections any which way. Team Jorge is just one example of these election meddling units. Brought to light via a recent Guardian investigation, Team Jorge’s mastermind Tal Hanan claimed he manipulated upwards of 33 elections.

The rapid creation and dissemination of fake social media profiles and content is far more harmful and widespread with Generative AI in the fold. Flipping elections is one of the worst possible outcomes, but grimmer consequences will arise if automated disinformation isn’t thwarted by an equally intelligent and scalable solution.

Finservs in the crosshairs

Cash is king. Synthetic fraudsters want the biggest haul, even if it’s a slow-burn operation stretched out over a long period of time. Naturally, that means finservs, who lost nearly $2 billion to bank transfer or payment fraud last year, are number one on their hit list. 

Most finservs today don’t have the tools to effectively combat AI-generated synthetic and SuperSynthetic fraud. First-party synthetic fraud—fraud perpetrated by existing “customers”—is rising thanks to SuperSynthetic “sleeper” identities that can imitate human behavior for months before cashing out and vanishing at the snap of a finger. SuperSynthetics can also use deepfake technology to evade detection, even if banks request a video interview during the identity verification phase.

It’s not like finservs are dilly-dallying. In a study from Wakefield, commissioned by Deduce, 100% of those surveyed had synthetic fraud prevention solutions installed along with sophisticated escalation policies. However, more than 75% of finservs already had synthetic identities in their customer databases, and 87% of those respondents had extended credit to fake accounts.

Fortunately for finservs and others trying to neutralize synthetic fraud, it’s not impossible to outsmart generative AI. With the right foundation in place—specifically a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence—and a change in philosophy, even a foe that grows smarter and more humanlike by the second can be thwarted.

This philosophical change is rooted in a top-down, bird’s-eye approach that differs from traditional, individualistic fraud prevention solutions that examine identities one by one. A macro view, on the other hand, sees identities collectively and groups them into a single signature, uncovering a trail of digital footprints. Behavioral patterns, such as social media posts and account actions that recur in lockstep, rule out coincidence. The SuperSynthetic smokescreen evaporates.

Whether it’s bad actors selling betting accounts, social media platforms stomping out disinformation, or finservs protecting their bottom lines, fake humans are more formidable than ever with generative AI and SuperSynthetic fraud at their disposal. Most companies seem to be aware of the stakes, but singling out bogus users and SuperSynthetics requires a retooled approach. Otherwise, revenue, users, and brand reputations will dwindle, and the ways in which fake accounts wreak havoc will multiply.

That rise in first-party synthetic fraud is no fluke. You have a SuperSynthetic identity problem.

Online fraud in the US totaled a record-breaking $3.56 billion through the first half of last year. Most consumer-facing companies have done the sensible thing and spent six or seven figures fortifying their perimeter defenses against third-party fraud.

But another effective, and seemingly counterintuitive, strategy for stopping today’s fraudsters is to think inside-out, not just outside-in. In other words, first-party synthetic fraud—or fraud perpetrated by existing “customers”—is threatening bottom lines in its own right, by way of AI–generated synthetic “sleeper” identities that play nice for months before executing a surprise attack.

Banks and other finserv (financial services) companies shouldn’t be surprised if their first-party synthetic fraud is off the charts. Deduce estimates that between 3% and 5% of new customers acquired in the past year are actually synthetic identities, specifically SuperSynthetic™ identities, created using generative AI.

The good news is that a simple change in philosophy will go a long way in neutralizing synthetic first-party fraudsters before they’re offered a loan or credit card.

First-party problems

Third-party fraud is when bad actors pose as someone else. It’s your classic case of identity theft. They leverage stolen credit card info and/or other credentials, or combine real and fake PII (Personally Identifiable Information) to create a synthetic identity, for financial or material gain. Consequently, the victims whose identities were stolen notice fraudulent transactions on their bank statements, or debt collectors track them down, and it’s apparent they’ve been had.

First-party synthetic fraud is even more cunning—and arguably more frustrating—because the account information and activity appear genuine, complicating the fraud detection process. The aftermath is where it hurts the most. Since, unlike third-party fraud, there isn’t an identifiable victim, finservs have no one to collect the debt from and are forced to bite the bullet.

Image Credit: Experian

One hallmark of first-party synthetic fraud is its patience. These sleeper identities appear legitimate for months, sometimes more than a year, making small deposits every now and then while interacting with the website or app like a real customer. Once they bump up their creditworthiness score and qualify for a loan or line of credit, it’s game over. The fraudster executes a “bust-out,” or “hit-and-run,” spending the money and leaving the bank with uncollectible debt.
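To make the sleeper pattern concrete, here is a minimal sketch of how a finserv might flag it: a long-tenured account, nothing but sporadic small deposits, then a sudden credit request. The function, its inputs, and its thresholds are illustrative assumptions for demonstration, not a production rule set.

```python
from datetime import date

def sleeper_risk(opened, deposits, credit_requested):
    """Heuristic flag for the 'sleeper' bust-out pattern.

    opened: date the account was opened
    deposits: list of (date, amount) tuples
    credit_requested: True once the account applies for a loan or credit line

    Flags accounts that aged quietly for 6+ months, with only small,
    infrequent deposits, before requesting credit. Thresholds are
    illustrative, not calibrated values.
    """
    if not credit_requested or not deposits:
        return False
    tenure_days = (max(d for d, _ in deposits) - opened).days
    all_small = all(amt < 500 for _, amt in deposits)       # no deposit over $500
    sparse = len(deposits) / max(tenure_days / 30, 1) <= 2  # at most ~2 deposits/month
    return tenure_days >= 180 and all_small and sparse
```

In practice a signal like this would be one input among many, not a verdict on its own; its value is in surfacing accounts for closer review before a loan is extended.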

This isn’t the work of your average synthetic identity. Such a degree of calculation and human-like sophistication can only be attributed to SuperSynthetic identities.

That escalated quickly

An Equifax report found that nearly two million consumer credit accounts, over the span of a year, were potentially synthetic identities. More than 30% of these accounts represented a major delinquency risk with cases averaging $8K-10K in losses.

The blame for rising first-party synthetic fraud—and the finservs left in its wake—can be placed squarely on the shoulders of SuperSynthetic identities. These AI-generated bots are proliferating worldwide, scaling their sleeper networks to execute bust-outs on a grand scale.

SuperSynthetics—featuring a three-pronged attack of synthetic identity fraud, legitimate credit history, and deepfake technology—need not brute-force their way into a bank’s pockets. Aside from a SuperSynthetic’s patient approach and aged, geo-located identity, its deepfake capability, a benefit of the recent generative AI explosion, is key to securing the long-awaited loan or credit card.

Selfie verification? A video interview? No problem. Deepfake tools, some of them free, are advanced enough to trick finservs even if they have liveness detection in their stack. Document verification? There’s a deepfake for that, too.

SuperSynthetics don’t have a kryptonite, per se. But analyzing identities from a different angle boosts the chances of a finserv spotting SuperSynthetics before they can circumvent the loan or credit verification stage.

Dusting for fingerprints

If finservs want to sniff out SuperSynthetic identities and successfully combat first-party synthetic fraud, they can’t be afraid of heights.

A top-down, bird’s-eye view is the best way to uncover the digital fingerprints or signatures of SuperSynthetics. Individualistic fraud prevention tools overlook these behavioral patterns, but a macro approach, which studies identities collectively, illuminates forensic evidence like a black light.

A top-down view reveals digital fingerprints that otherwise would go undetected.

Grouping identities into a single signature, and examining them alongside millions of known-fraudulent identities, reveals indisputable evidence of SuperSynthetic activity: social media posts and account actions that recur on the same day and time each week across an entire group, or signature, of identities. Coincidence is out of the question.
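As an illustration of this signature grouping, a simplified sketch might bucket account events by their weekly time slot and surface any cluster of accounts acting in lockstep. The function name, event format, and group-size threshold here are hypothetical, chosen only to demonstrate the idea:

```python
from collections import defaultdict
from datetime import datetime

def signature_groups(events, min_group=3):
    """Group accounts whose actions recur at the same weekday and time.

    events: list of (account_id, iso_timestamp) tuples.
    Returns sets of min_group+ accounts sharing an identical weekly slot;
    that kind of lockstep cadence is what rules out coincidence.
    """
    slots = defaultdict(set)
    for account, ts in events:
        t = datetime.fromisoformat(ts)
        # Key on (weekday, hour, minute): same slot across different weeks
        slots[(t.weekday(), t.hour, t.minute)].add(account)
    return [accounts for accounts in slots.values() if len(accounts) >= min_group]
```

For example, three accounts that all post at 09:15 every Monday, across different weeks, collapse into one signature, while an account with ordinary, irregular activity never joins a group.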

Of course, not every finserv has the firepower to adopt this strategy. In order to enable a big-picture view, companies’ anti-fraud stacks need a large and scalable source of real-time, multicontextual, activity-backed identity intelligence.

There are other avenues. Consider, for example, the only 100-percent foolproof solution to first-party synthetic fraud: in-person identity verification. Even if this approach were reserved for the pre-loan juncture, it seems unlikely that many companies would take on the added friction, though driving down to the bank is a small price to pay for a five- or ten-thousand-dollar loan.

If finservs don’t wish to revisit the good old days of face-to-face verification, the top-down, signature approach is the only other viable deterrent to first-party synthetic fraud. Solutions that analyze identities one by one won’t stop SuperSynthetics before a loan or credit card is granted, and by that point it’s already over.

An old-school approach could be the answer for finservs

For many people, video conferencing apps like Zoom made work, school, and other everyday activities possible amid the global pandemic—and more convenient. Remote workers commuted from sleeping position to upright position. Business meetings resembled “Hollywood Squares.” Business-casual meant a collared shirt up top and pajama pants down low.

Fraudsters were also quite comfortable during this time. Unprecedented numbers of people sheltering in place naturally caused an ungodly surge in online traffic and a corresponding increase in security breaches. Users were easy prey, and so were many of the apps and companies they transacted with.

In the financial services (finserv) sector, branches closed down and ceased face-to-face customer service. Finserv companies relied on Zoom for document verification and manual reviews, and bad actors, armed with stolen credentials and improved deepfake technology, took full advantage.

Even in the face of AI-generated identity fraud, most finservs still use remote identity verification to comply with regulatory KYC requirements, and when it comes time to offer a loan. It’s easier than meeting in person, and what customer doesn’t prefer verifying their identity from the comfort of their couch?

But AI-powered synthetic identities are getting smarter and, while deepfake deterrents are closing the gap, a return to an old-school approach remains the only foolproof option for finservs.

Deepfakes, and the SuperSynthetic™ quandary

Gen AI platforms such as ChatGPT and Bard, coupled with their nefarious brethren FraudGPT and WormGPT and the like, are so accessible it’s scary. Everyday users can create realistic, deepfaked images and videos with little effort. Voices can be cloned and manipulated to say anything and sound like anyone. The rampant spread of misinformation across social media isn’t surprising given that nearly half of people can’t identify a deepfaked video.

More disturbing: deepfaked Mona Lisa, or that someone made this 3+ years ago?

Finserv companies are especially susceptible to deepfaked trickery, and bypassing remote identity verification will only get easier as deepfake technology continues to rapidly improve.

For SuperSynthetics, the new generation of fraudulent deepfaked identities, fooling finservs is quite easy. SuperSynthetics, a one-two-three punch of deepfake technology, synthetic identity fraud, and legitimate credit history, are more humanlike and individualistic than any previous iteration of bot. The bad actors who deploy these SuperSynthetic bots aren’t in a rush; they’re willing to play the long game, depositing small amounts of money over time and interacting with the website to convince finservs they’re prime candidates for a loan or credit application.

When it comes time for the identity verification phase, SuperSynthetics deepfake their documents, selfie, and/or video interview…and they’re in.

An overhaul is in order

Deepfake technology, which first entered the mainstream in 2018, is still in its relative infancy, yet it already pokes plenty of holes in remote identity verification.

The “ID plus selfie” process, as Gartner analyst Akif Khan calls it, is how most finservs are verifying loan and credit applicants these days. The user takes a picture of their ID or driver’s license, authenticity is confirmed, then the user snaps a picture of themselves. The system checks the selfie for liveness and makes sure the biometrics line up with the photo ID document. Done.

The process is convenient for legitimate customers and fraudsters alike, thanks to the growing availability of free deepfake apps. Using these free tools, fraudsters can deepfake images of documents and successfully pass the selfie step, most commonly by executing a “presentation attack,” in which their primary device’s camera is aimed at the screen of a second device displaying a deepfake.

Khan advocates for a layered approach to deepfake mitigation, including tools that detect liveness and check for certain types of metadata. This is certainly on point, but there’s an old-school, far less technical way to ward off deepfaking fraudsters. Its success rate? 100%.

The good ol’ days

Remember handshakes? How about eye contact that didn’t involve staring into a camera lens? These are merely vestiges of the bygone in-person meetings that many finservs used to hold with loan applicants pre-COVID.

Outdated and less efficient as face-to-face meetings with customers might be, they’re also the only rock-solid defense against deepfakes.

Not even advanced liveness detection is a foolproof deepfake deterrent.

Sure, the upper crust of finserv companies likely have state-of-the-art deepfake deterrents in place (e.g., 3D liveness detection solutions). But liveness detection doesn’t account for deepfaked documents or, more importantly, deepfaked video, nor for the fact that the generative AI tools available to fraudsters are advancing just as fast as vendor solutions, if not faster. It’s a full-blown AI arms race, and with it comes a lot of question marks.

In-person verification (only for high-risk activities) puts these fears to bed. Is it frictionless? Obviously far from it, though workarounds, such as traveling notaries that meet customers at their residence, help ease the burden. But if heading down to a local branch for a quick meet-and-greet is what it takes to snag a $10K loan, will a customer care? They’d probably fly across state lines if it meant renting a nicer apartment or finally moving on from their decrepit Volvo.

Time to layer up

Khan’s recommendation, for finservs to assemble a superteam of anti-deepfake solutions, is sound, so long as companies can afford to do so and can figure out how to orchestrate the many solutions into a frictionless consumer experience. Vendors indeed have access to AI in their own right, powering tools that directly identify deepfakes through patterns, or that key in on metadata such as the resolution of a selfie. Combine these with the most crucial layer, liveness detection, and the final result is a stack that can at the very least compete against deepfakes.
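As a hedged sketch of what such a layered stack’s decision step could look like: each detector (liveness, selfie metadata checks, artifact detection) contributes an independent probability-of-fake score, and a weighted combination makes the final call, so no single weak layer decides alone. The layer names, weights, and threshold below are illustrative assumptions, not any vendor’s actual scoring.

```python
def layered_verdict(scores, weights=None, threshold=0.5):
    """Combine independent deepfake-detector scores into one decision.

    scores: dict mapping layer name -> probability-of-fake in [0, 1]
            (e.g. liveness, selfie metadata, artifact detection).
    weights: optional per-layer weights; defaults to equal weighting.
    Returns ("reject" | "accept", combined risk score).
    """
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    # Weighted average of the layers' scores
    risk = sum(scores[name] * weights[name] for name in scores) / total
    return ("reject" if risk >= threshold else "accept", round(risk, 3))
```

A design note on the weighted average: it degrades gracefully when one layer is fooled (say, liveness bypassed by a presentation attack), because the remaining layers still pull the combined risk upward.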

SuperSynthetics aren’t as easy to neutralize. In previous posts, we’ve advocated for a “top-down” anti-fraud solution that spots these types of identities before the loan or credit application stage. Contrary to individualistic fraud prevention tools, this bird’s-eye view reveals digital fingerprints—concurrent account activities, simultaneous social media posts, etc.—that otherwise would go undetected.

In the meantime, it doesn’t hurt to consider the upside of an in-person approach to verifying customer identities (prior to extending a loan, not onboarding). No, it isn’t flashy, nor is it flawless. However, it is reliable and, if finservs effectively articulate the benefit to their customers—protecting them from life-altering fraud—chances are they’ll understand.

Customer or AI-Generated Identity? The lines are as blurry as ever.

Today’s fraudsters are truly, madly, deeply fake.

Deepfaked identities, which use AI-generated audio or visuals to pass for a legitimate customer, are multiplying at an alarming rate. Banks and other fintech companies, which collectively lost nearly $2 billion to bank transfer or payment fraud in 2022, are firmly in their crosshairs.

Sniffing out deepfaked chicanery isn’t easy. One study found that 43% of people struggle to identify a deepfaked video. It’s especially concerning that this technology is still relatively young yet already capable of luring consumers and businesses into fraudulent transactions.

Over time, deepfakes will seem increasingly less fake and much harder to detect. In fact, an offshoot of deepfaked synthetic identities, the SuperSynthetic™ identity, has already emerged from the pack. Banks and financial organizations have no choice but to stay on top of developments in deepfake technology and swiftly adopt a solution to combat this unprecedented threat.

Rise of the deepfakes

Deepfakes have come a long way since exploding onto the scene roughly five years ago. Back then, deepfaked videos aimed to entertain. Most featured harmless superimpositions of one celebrity’s face onto another, such as this viral Jennifer Lawrence-Steve Buscemi mashup.

The trouble started when users began deepfaking sexually explicit videos, opening up a massive can of privacy- and ethics-related worms. Then a 2018 video of a deepfaked Barack Obama speech showed just how dangerous the technology could be.

Image Credit: DHS

The proliferation and growing sophistication of deepfakes over the past five years can be attributed to the democratization of AI and deep learning tools. Today, anyone can doctor an image or video with just a few taps. FakeApp and Lyrebird and countless other apps enable smartphone users to seamlessly integrate someone’s face into an existing video, or generate a new video that can easily pass for the real deal.

Given this degree of accessibility, the threat of deepfakes to banks and fintech companies will only intensify in the months and years ahead. The specter of new account fraud, perpetrated by way of a deepfaked synthetic identity, looms large in the era of remote customer onboarding.

This is a stickup

Synthetic identity fraud, in which bad actors invent a new identity using a combination of stolen and made-up credentials, has already cost banks upwards of $6 billion. Deepfake technology only adds fuel to the fire.

A deepfaked synthetic identity heist doesn’t require any heavy lifting. A fraudster crops someone’s face from a social media picture and is well on the way to spawning a lifelike entity that speaks, blinks, and moves its head on screen. Image- or video-based identity verification, the KYC protocol designed to deter fraud before an account is opened or extended credit, is rendered moot. The fraudster’s uploaded selfie will be a dead ringer for the face on the ID card. Even a live video conversation with an agent is unlikely to ferret out a deepfaked identity.

Not even Columbo can spot a deepfaked synthetic identity.

Audio-based verification processes are circumvented just as easily. Exhibit A: the vulnerability of the voice ID technology used by banks across the US and Europe, ostensibly another layer of login security that prompts users to say some iteration of, “My voice is my password.” This sounds great in theory, but AI-generated audio solutions can clone anyone’s voice and create a virtually identical replica. One user, for example, tapped voice creation tool ElevenLabs to clone his own voice using an audio sample. He accessed his account in one try.

In this use case, the bad actor would also need a date of birth to access the account. But, thanks to frequent big-time data leaks—such as the recent Progress Software MOVEit breach—dates of birth and other Personally Identifiable Information (PII) are readily available on the dark web.

Here come the SuperSynthetics

In deepfaked synthetic identities, banks and financial services platforms clearly face a formidable foe. But this worthy opponent has been in the gym, protein-shaking and bodyhacking itself into something stronger and infinitely more dangerous: the SuperSynthetic identity.

SuperSynthetic identities, armed with the same deepfake capabilities as regular synthetics (and then some), bring an even greater level of Gen AI-powered smarts to the table. No need for a brute force attack. SuperSynthetics operate with a sophistication and discernment that is so lifelike it’s spooky. In this regard, one must only look at the patience of these bots.

SuperSynthetics are all about the long con. Their aged and geo-located identities play nice for months, engaging with the website and making small deposits here and there, enough to appear human and innocuous. Once enough of these transactions accumulate, and trust is gained from the bank, a credit card or loan is extended. Any additional verification is bypassed via deepfake, of course. When the money is deposited into their SuperSynthetic account the bad actor immediately withdraws it, along with their seed money, before finding another bank to swindle.

How prevalent are SuperSynthetics? Deduce estimates that between 3% and 5% of financial services accounts onboarded within the past year are in fact SuperSynthetic “sleepers” waiting to strike. It certainly warrants a second look at how customers are verified before obtaining a loan or credit card, including the consideration of in-person verification to rule out any deepfake activity.

No time like the present

If deepfaked synthetic identities don’t call for a revamped cybersecurity solution, deepfaked SuperSynthetic identities will certainly do the trick. Our money is on a top-down approach that views synthetic identities collectively rather than individually. Analyzing synthetics as a group uncovers their digital footprints—signature online behaviors and patterns too consistent to suggest mere coincidence.

Whatever banks choose to do, kicking the can down the road only works in favor of the fraudsters. With every passing second, the deepfakes are looking (and sounding) more real.

Time is a-tickin’, money is a-burnin’, and customers are a-churnin’.

How SuperSynthetic identities carry out modern day bank robberies

The use cases for generative AI continue to proliferate. Need a vegan-friendly recipe for chocolate cookies that doesn’t require refined sugar? Done. Need to generate an image of Chuck Norris holding a piglet? You got it.

However, not all Gen AI use cases are so innocuous. Fraudsters are joining the party and developing tools like WormGPT and FraudGPT to launch sophisticated cyberattacks that are significantly more dangerous and accessible. Consumer and enterprise companies alike are on high alert, but fintech organizations really need to upgrade their “bot-y” armor.

Each new wave of bots grows stronger and brings its unique share of challenges to the table—none more than synthetic “Frankenstein” identities consisting of real and fake PII data. But, alas, the next evolution of synthetic identities has entered the fray: SuperSynthetic™ identities.

Let’s take a closer look at how these SuperSynthetic bots came to be, how they can effortlessly defraud banks, and how banks need to change their account opening workflows.

The evolution of bots

Before we dive into SuperSynthetic bots and the danger they pose to banks, it’s helpful to cover how we got to this point.

Throughout the evolution of bots we’ve seen the good, the bad, and the downright nefarious. Well-behaved bots like web crawlers and chatbots help improve website or app performance; naughty bots crash websites, harm the customer experience and, worst of all, steal money from businesses and consumers.

The evolutionary bot chart looks like this:

Generation One: These bots are capable of basic scripting and automated maneuvers. Primarily they scrape, spam, and perform fake actions on social media apps (comments, likes, etc.).

Generation Two: These bots automated tasks such as web analytics collection and user interface testing, speeding up website development.

Generation Three: This wave of bots adopted complex machine learning algorithms, allowing for the analysis of user behavior to boost website or app performance.

Generation Four: These bots laid the groundwork for SuperSynthetics. They’re highly effective at simulating human behavior while staying off radar.

Generation Five: SuperSynthetic bots with a level of sophistication that negates the need to execute a brute force attack hoping for a fractional chance of success. Individualistic finesse, combined with the bad actor’s willingness to play the long game, makes these bots undetectable by conventional bot mitigation and synthetic fraud detection strategies.

Playing the slow game

So, how have SuperSynthetics emerged as the most formidable bank robbers yet? It’s more artifice than bull rush.

Over time, a SuperSynthetic bot uses its AI-generated identity to deposit small amounts of money via Zelle, ACH, or another digital payments app while interacting with various website functions. The bot’s meager deposits accumulate over the course of several months, and regular access to its bank account to “check its balance” earns the reputation of a “customer in good standing.” Its creditworthiness score increases and an offer of a credit card or a personal, unsecured loan is extended.

At this point it’s hook, line, and sinker. The bank deposits the loan amount or issues the credit card and the fraudster transfers it out, along with their seed funds, and moves on to the next unsuspecting bank. This is a cunning, slow-burn operation only a SuperSynthetic identity can successfully carry out at scale. Deduce estimates that between 3% and 5% of accounts onboarded within the past year at financial services and fintech institutions are in fact SuperSynthetic sleeper identities.

Such patience and craftiness is unprecedented in a bot. Stonewalling SuperSynthetics takes an equally novel approach.

A change in philosophy

Traditional synthetic fraud prevention solutions won’t detect SuperSynthetic identities. Built around static data, these tools lack the dynamic, real-time data and scale needed to sniff out an AI-generated identity. Even manual review processes and tools like DocV are no match as deepfake AI methods can create realistic documents and even live video interviews.

An individualistic approach offers little resistance to SuperSynthetic bots.

Fundamentally, these static-based tools take an individualistic approach to stopping fraud. The data pulled from a range of sources during the verification phase is analyzed one identity at a time, so a SuperSynthetic identity appears legitimate and passes every verification check. Fraudulent patterns missed. Digital forensic footprints overlooked.

A philosophical change in fraud prevention is foundational to banks keeping SuperSynthetic bots out of their pockets. Verifying identities as a collective group, or signature, is the only viable option.

A view from the top

Things always look different from the top floor. In the case of spotting and neutralizing SuperSynthetic identities, a big-picture perspective reveals digital footprints otherwise obscured by an individualistic anti-fraud tool.

A bird’s-eye view that groups identities into a single signature uncovers suspicious evidence such as simultaneous social media posts, concurrent account actions, matching time-of-day and day-of-week activities, and other telltale signs of fraud. Considering the millions of fraudulent identities in the mix, it’s illogical to attribute this evidence to mere happenstance.
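One way to implement that bird's-eye grouping is to reduce each identity's activity to a timing fingerprint and then cluster identities whose fingerprints collide exactly. This is a minimal sketch of the idea under simplified assumptions; the function names are invented and this is not a description of any vendor's actual pipeline.

```python
from collections import defaultdict
from datetime import datetime

def signature_key(events: list[datetime]) -> tuple:
    """Reduce an identity's activity to its timing fingerprint: the sorted
    set of (weekday, hour, minute) slots in which it acts."""
    return tuple(sorted({(t.weekday(), t.hour, t.minute) for t in events}))

def group_by_signature(activity: dict[str, list[datetime]]) -> dict[tuple, list[str]]:
    """Group identities that share an identical timing fingerprint.
    Only groups of two or more identities are returned as suspicious."""
    groups = defaultdict(list)
    for identity, events in activity.items():
        groups[signature_key(events)].append(identity)
    return {k: ids for k, ids in groups.items() if len(ids) > 1}
```

In practice the fingerprint would include far more than timestamps (device, network, behavioral telemetry), and matching would be fuzzy rather than exact, but the principle is the same: patterns invisible per-identity become obvious per-group.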

There’s no denying that SuperSynthetic identities have arrived. No prior iteration of bot has ever appeared so lifelike and operated with such precision. If banks want to protect their margins and user experience, verifying identity via a signature approach is a must. This does require bundling existing fraud prevention stacks with ample (and scalable) real-time identity intelligence, but the first step in thwarting SuperSynthetics is an ideological one: co-opt the signature strategy.

How a top-down approach can unmask AI-generated fraudsters

Whichever side of the AI debate you’re on, there’s no denying that AI is here to stay, and it has barely started to tap its potential.

AI makes life easier on consumers and businesses alike. However, the proliferation of AI-based tools helps fraudsters as well.

As the AI arms race heats up, one emerging threat that’s tormenting businesses is AI-generated identity fraud. With help from generative AI, fraudsters can easily use previously acquired PII (Personally Identifiable Information) to establish a credible online identity that appears human-like, replete with an OK credit history, then leverage deepfakes to legitimize a synthetic identity with documents, voice, and video. As of April 2023, audio and video deepfakes alone have duped one-third of companies.

Without the proper fortification in place, financial services and fintech businesses are prime targets for AI-generated identities, new account opening fraud, and the resultant revenue loss.

The (multi)billion-dollar question is, how do these companies fight back when AI-generated identities are seemingly indistinguishable from real customers?

Playing the long game

There are several ways in which AI helps create synthetic identities.

For one, social engineering and phishing with AI-powered tools is as easy as “PII.” Generative AI can crank out a malicious yet convincing email or deepfake a document or voice to obtain personal info. In terms of scalability, fraudsters can now manage thousands of fake identities at once thanks to AI-assisted CRMs and marketing automation software and purpose-built platforms for committing fraud such as FraudGPT and WormGPT. Thousands of synthetics creating “aged” and geo-located email addresses, signing up for newsletters, and making social media profiles and other accounts—all on autopilot. This unparalleled sophistication is the hallmark of an even more formidable synthetic identity: the SuperSynthetic™ identity.

Thanks to AI’s automation and effective utilization of previously stolen PII data, SuperSynthetic identities can assemble a credible trail of online activity. But these SuperSynthetics have a credible (maybe not an 850 but a solid 700) credit history, too. Therein lies the other challenge with AI-generated identity fraud: the human bad actors behind the computer or phone screen, pulling the strings, are remarkably patient. They’ll invest actual money by making deposits over time into a newly opened bank account, or make small purchases on a retailer’s website to build “existing customer” status, to gradually forge a bogus identity that lands them north of $15K (according to the FTC, a net ROI of thousands of dollars). AI-generated fraud is a very profitable business.

The chart above shows how a fraudster boosts credibility for an identity both online and with credit history before opening a credit card or loan, or even transacting via BNPL (Buy Now Pay Later). They sign up for cheap mobile phone plans, such as Boost, Mint, or Cricket, or make small pre-paid debit card donations to charities linked to their social security number. They can even use AI to find rental vacancies in MLS listings in a geography that maps to their aged and geo-located legend, in order to establish an online activity history of paying utility bills. The patience, calculation, and cunning of these fraudsters is striking—and just as dangerous as the AI that fuels their SuperSynthetic identities.

Looking at the big picture

Neutralizing AI-generated identity fraud requires a new approach. Traditional bot mitigation and synthetic fraud prevention solutions reliant upon static data about a single identity need some extra oomph to stonewall persuasive SuperSynthetics.

These static data-based tools lack the dynamic, real-time data and scale necessary to pick up the scent of AI-generated identity fraud. Patterns and digital forensic footprints get overlooked, and the sophistication of these fake identities even outflanks manual review processes and tools like DocV.

The bigger problem is that, when today’s anti-fraud solutions pull data from a range of sources during the verification phase, they’re doing so on an individual identity basis. Why is this problematic? Because a SuperSynthetic identity on its own will look legitimate and pass all the verification checks—including a manual review, the last bastion of fraud prevention. However, analyzing that same identity from a high-level vantage point changes everything. The identity is revealed to be a member of a larger signature of SuperSynthetic identities. Like a black light, this bird’s-eye view uncovers previously obscured, digital forensic evidence. 

But what does this evidence even look like? And what does it take to transition from an individualistic to a signature-centered approach?

The key to the evidence locker

AI-generated SuperSynthetic identities leave behind a variety of digital fingerprints or signatures. A top-down view reveals suspicious patterns across millions of fraudulent identities that are too identical to be a coincidence. 

For example, if the same three identities post a comment on the New York Times website every Tuesday morning at 7:32 a.m. PST, the odds that they are three separate humans are infinitesimally small; each is almost certainly SuperSynthetic.
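That intuition holds up to a back-of-envelope calculation. Under the simplifying assumption that each genuine user posts once a week at a uniformly random minute, the chance of the observed coincidence collapses fast:

```python
MINUTES_PER_WEEK = 7 * 24 * 60  # 10,080 minutes in a week

def p_same_minute(n_identities: int, n_weeks: int) -> float:
    """Probability that n independent once-a-week posters all land on the
    same minute-of-week, every week, for n_weeks running (uniform model).
    The first poster fixes the minute; each other poster must match it."""
    per_week = (1 / MINUTES_PER_WEEK) ** (n_identities - 1)
    return per_week ** n_weeks
```

For three identities matching over four consecutive weeks, the probability is on the order of 10^-32. Real human posting habits are not uniform, of course, so this is only a rough bound, but the conclusion survives any reasonable model: exact, repeated synchronization is coordination, not coincidence.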

Switching over to a top-down approach isn’t merely a philosophical change. Unlocking the requisite evidence to thwart AI-generated identities demands premium identity intelligence at scale, combined with sophisticated ML that gathers and analyzes large swaths of real-time data from diverse sources.

In short, an activity-based, real-time identity graph capable of sifting through hundreds of millions of identities.

Protect your margins (and UX)

A ginormous real-time identity graph rivaling the likes of big tech? This may seem like an unrealistic path to stopping AI-generated identities. It isn’t.

Deduce employs the largest identity graph in the US: 780 million US privacy-compliant identity profiles and 1.5 billion daily user events across 150,000+ websites and apps. Additionally, Deduce has previously seen 89% of new users at the account creation stage—where AI-generated synthetics typically pass through undetected—and 43% of these users hours before they enter the new account portal.

Deduce’s premium identity intelligence, patented technology, and formidable ML algorithms enable a multi-contextualized, top-down approach. Identities are analyzed against signatures of synthetic fraudsters—hundreds of millions of them—to ensure they’re the real McCoy. It’s a far superior alternative to overtightening existing risk models and causing unnecessary friction followed by churn, reputational harm, and revenue loss.

Want to outsmart AI-generated identity fraud while preserving a trusted user experience? Contact us today.

In the war against digital goods fraud, real-time is the only time

Today’s customer wants it, and they want it now. But, until teleportation becomes a reality, purchasing physical products online won’t satisfy their need for instant gratification. This is a big reason why digital goods purchases jumped 51% from 2020 to 2021, and, at last check, 65% in 2022.

The instant delivery of digital goods—software, event tickets, and especially online gift cards—spells trouble for e-commerce companies. Whereas physical merchandise often provides a multi-day delivery buffer allowing ample time for fraud detection and investigation, digital download products require an immediate decision on fraud. This overburdens the majority of fraud stacks and gives fraudsters the upper hand.

How can e-commerce merchants effectively combat digital goods fraud at scale—and steer clear of false positives—when these purchases demand a snap judgment?

The digital goods dilemma

Analysts anticipate that e-commerce merchants will lose about $24 billion to online payments fraud by 2024. Along with other types of e-commerce fraud, such as friendly fraud, new account fraud, and account takeover (ATO), digital goods fraud will wreak havoc in its own right.

Unlike purchasing physical products online, the instantaneous nature of buying digital products presents unique challenges:

Immediate delivery. With digital goods arriving to customers in mere seconds, e-commerce companies with traditional fraud stacks can’t review orders, successfully sniff out fraudulent transactions, or initiate manual reviews. It’s real-time or nothing, approve or reject.

False positives. Split-second approval times depend on strict guidelines to determine if digital goods customers are legit. Because fraud is so prevalent, e-tailers have tightened up security protocols more than ever and given rise to false positives. Wrongly identifying customers is the fast track to churn and lost revenue.

Lack of data. For merchants with an outdated fraud stack, or one that is based solely on transaction records or static data, digital goods purchases don’t provide nearly enough real-time data to accurately verify a transaction. No phone number, no physical address, no transaction history. Seasoned bad actors can easily deploy their arsenal of stolen or fake emails, combined with stolen credit card numbers, and mask their identities via proxy servers.

Whether a company is selling physical or digital products online, today’s lofty user experience (UX) expectations remain intact. Many customers are too impatient for passwords, confirmation emails, and one-time passcodes (OTPs). UX potholes typically lead to reputational blemishes and churn. E-gift cards, one of the more popular digital purchases, can cause similar damage if fraudsters get a hold of them.

E-gift cards leading the pack

E-gift cards, arguably, are public enemy number one in the war against digital goods fraud. In-store gift card and e-gift card sales are expected to exceed $238 billion by 2025. Given our increasingly online world and the growing consumer need for instant gratification, e-gift cards will likely comprise most of these sales. And, just as likely, a significant portion of e-gift card “customers” will be fraudsters.

Fraudsters purchase online gift cards using stolen payment info. More often than not their intention is to either resell the gift card for profit, or use it to launder money by using a stolen credit card to buy a legitimate debit card or merchandise gift card. E-gift cards are a win-win-win for bad actors: they are easy to obtain, easy to redeem, and practically untraceable since they require little to no personal data at checkout (usually just payment details and an email). Armed with their new legitimate digital card, a fraudster can enter a store and purchase a high-ticket product such as a flat screen TV or computer with no chance that the transaction will fail at checkout. 

All digital goods fraud is bad news for e-tailers, but the effects of e-gift card fraud are particularly devastating. Chargebacks, customer experience issues, lost inventory. If consumers buy online gift cards from fraudsters and redeem them, companies, in most cases, will have to eat the cost of these transactions.

And don’t forget about the reputational impact. Similar to the fallout from false positives, victims of online gift card fraud, none too pleased with the merchant, will probably take their future business elsewhere.

Fraud detection in a flash

Unfortunately for online merchants, fraud risk and the demand for digital goods and instant gratification have reached an all-time high.

Fortunately for online merchants, the real-time data needed to thwart digital goods fraud, and approve genuine customers, is not the stuff of make-believe.

Deduce equips e-commerce vendors with the breadth of real-time, multi-dimensional identity intelligence and speed to approve or deny digital goods transactions at scale. When e-tailers layer Deduce on top of their existing fraud stack, they’re tapping into a network of 660 million US privacy-compliant identity profiles and 1.5 billion daily user activity events across 150,000+ websites and apps. Unlike transaction-specific identity data, Deduce captures a wide range of online activity (banking, gaming, communicating, browsing, shopping, etc.) for every identity. In many cases Deduce will have seen the customer within hours of their checkout and will know if the identity is legitimate or fraudulent. Stripped-down as digital goods purchases are—no phone number, no address, etc.—Deduce’s network still possesses enough real-time and historical data to determine if a user is the real deal.

To date, Deduce has seen close to 90% of customers within its network. So, if Deduce doesn’t recognize an identity, chances are that prospective digital goods customer is up to no good—or, at the very least, should be given a closer look. Multiple customers confirm that fraud is 10 times more likely if Deduce has not previously seen an identity on the network. On the other hand, Deduce’s real-time data ensures good customers are identified as such, and don’t fall victim to rigid fraud guidelines.

Real-time data, sourced from the largest identity graph in the US, is the only viable means of fighting digital goods fraud and keeping legitimate customers happy.

Contact us today and see why there’s no time like real-time.

A preemptive and UX-friendly approach to credit funnel optimization

It’s one thing to Know Your Customer; it’s another to Know Your Con-Artist. KYC checks, ostensibly, prevent banks from doing business with bad actors, but doing so requires neutralizing fraudsters at the point of entry, before they’re able to apply for a loan.

In other words: early bird gets the fraudster.

A preemptive strategy is the only realistic way to effectively prevent credit application fraud—when a fraudster submits personally identifiable information (PII) to apply for credit (credit card, loan, etc.). This approach saves banks from running costly, unnecessary credit checks on fraudsters, and ensures genuine customers are identified up front and not wrongfully declined. It also curbs the risk of fraudsters slipping through the credit application process scot-free. In a 2018 study, one major North American bank issued 1,400 credit cards per month to fraudsters—a loss of ~$500,000 per month.

But is spotting fraud pre-credit application, before the verification stage, even feasible?

A never-ending money hole

Before we discuss the practicality of shutting down fraudsters pre-credit application, let’s look at the two glaring downsides of not adopting this approach. Problem number one: credit application fraud can be a significant money pit (and time suck) for banks.

Running a credit application through third-party sources—namely, multiple credit bureaus—can cost between $3 and $5 per application. The fraudster may then be asked to verify their document, a fabricated driver’s license matching their details, which costs the bank another $3 to $4 per applicant. Manual review alone can cost another $50 to $75. And, since synthetic identity fraud is now the largest form of identity fraud in the country, there’s a good chance banks could be chasing a made-up, nonexistent entity.
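Putting midpoint figures on those ranges shows how quickly the waste compounds. The dollar amounts below are simply the midpoints of the per-application costs cited above, and the function names are invented for illustration:

```python
def cost_per_fraudulent_application(bureau: float = 4.0,
                                    doc_verification: float = 3.5,
                                    manual_review: float = 60.0) -> float:
    """Midpoints of the ranges above: $3-5 bureau pulls, $3-4 document
    verification, $50-75 manual review."""
    return bureau + doc_verification + manual_review

def monthly_waste(fraud_apps_per_month: int) -> float:
    """Total spent vetting applicants who do not exist."""
    return fraud_apps_per_month * cost_per_fraudulent_application()
```

At 1,000 fraudulent applications a month, that is $67,500 spent every month vetting identities that do not exist, before counting a single dollar of actual credit losses.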

Synthetic identity fraudsters, whose fake identities are stitched together using bits and pieces from real identities, exploit the very processes that banks and fraud solutions rely on. For example, most banks look for static PII data such as a social security number or date of birth when analyzing credit applications, which is easily obtainable from the dark web. Additionally, synthetic fraudsters will often apply for credit with two lenders to compensate for the identity’s lack of credit history. Ironically, the first lender’s rejection of credit will usually initiate a credit file that enables the second credit application to go through. Low credit limits? Synthetics can work around that, too. A few small transactions here and there, paid off at the end of the month, and they can steadily increase their spending limit until it’s worthwhile to cash out.

A churn for the worst

Not detecting fraud until after the credit application process lets more fraudsters in. It also keeps more good users out.

For instance, geography is a common false positive trigger if a user has recently moved. After this user fills in their basic info, including their address, and creates an account, you can almost guarantee a red flag from the credit bureau. The new address doesn’t match what’s on file. Next step? Document verification. And if users are still around at that point, banks should count their lucky stars.

Legitimate users with thin files are the most likely to get declined. “Thin file” refers to applicants whose credit history is so sparse that standard fraud prevention tools lack the data to calculate risk. A thin file applicant might be a student applying for their first credit card. Other examples include immigrants without credit history in the US; consumers who haven’t used credit in a long time; and people who predominantly use cash over credit.

According to an Experian report, about 62 million Americans have a thin file.

Unlike synthetic fraudsters, who are cunning enough to establish a semblance of credit history by applying to multiple lenders, genuine identities with thin files are often automatically declined. Many of these rejected users will apply to another bank, resulting in churn and lost revenue. Even worse, a substantial amount of unfair declines could harm a bank’s reputation over the long term.

It starts at the top

We’ve established that preventing credit application fraud and false positive declines isn’t tenable unless banks act before applicants apply for credit. But rearranging the UX and security for the credit application process isn’t entirely an in-house operation. It requires assistance from a powerful and highly intelligent first line of defense, with a data stack that rivals the FAANG gang.

Deduce’s real-time identity network fits the description: 660 million US privacy-compliant identity profiles and 1.5 billion daily user events across 150,000+ websites and apps. With this magnitude of data powering their credit app fraud prevention efforts, banks can identify fraudsters and legitimate users pre-credit application, effectively bridging security and UX.

Deduce’s approach to preventing credit application fraud

As illustrated in the graphic above, if the Deduce Identity Network deems the user a fraudster, they’re sent to a landing page devoid of a loan or credit application option; if the user is legit, they’re presented with a list of loan or credit options that fit their needs. This is credit funnel optimization done right.

It’s no wonder that some leading financial institutions, such as SoFi, have adopted this preemptive, highly optimized approach to their credit application journeys. Aside from thwarting fraudsters and false positives, and improving conversion rates, checking for fraud upfront assists marketing efforts. If a new user is determined to be genuine but rejected because of their credit score, the initial collection of their contact info allows banks to keep in touch. That way, users aren’t lost in the sauce and can reapply in the future once their credit score reaches the required threshold.

SoFi’s signup page

There’s no better way to shut down credit app fraudsters who’ve grown accustomed to banks’ antifraud processes. Preventing false positives and salvaging quality customers is vital in its own right, and may prove even more so in the grand scheme of things. By placing Deduce at the forefront of your credit app fraud strategy, the marriage of security and UX is indeed possible, and bottom lines will be all the better for it.

Ready to shut the door on credit application fraud? Contact us today and get up and running in a few hours.

How do you verify a customer you’ve never seen before?

Spring is almost here. Sunny days will soon be upon us and it feels like the pandemic is finally tailing off. Seems the perfect time for fraudsters to rain on everyone’s parade and, boy, are they ever!

E-commerce fraud surpassed $41 billion in 2022. A recent bombshell report from the Federal Trade Commission found that fraudsters duped consumers out of $8.8 billion last year, a 44% increase from 2021. Additionally, the volume of phishing attacks grew by more than 175% over the past two years.

Fraud is surging across the board, but one type of fraud—to the chagrin of online merchants—is really riding the wave: first-checkout fraud.

Here is the skinny on what first-checkout fraud is, what makes it extra difficult to stop, and how businesses can solve for the unique problem it presents.

An overlooked threat

Deduce estimates that 71% of fraudulent transactions occur at first checkout or guest checkout. It’s not neuroscience. How are merchants supposed to approve or reject a transaction from a new “customer” with no payment history or outdated static PII data? It’s like playing in the Super Bowl and not having a scouting report on the other team’s players, zero intel on who’s who, their behaviors and tendencies. You can probably guess what team our money is on.

If a purchase involves a physical product, merchants can, of course, reject a first-checkout customer or refer the transaction to manual review. Not enough data? Why not err on the side of caution? But doing so doesn’t equal playing it safe. Sure, you might stop a potentially bogus transaction; however, this runs the risk of a false positive—rejecting a legitimate customer who isn’t likely to return.

Even if a merchant has access to PII data they can cross-reference, such as address and phone number, they’re unlikely to deploy 2FA for mobile shoppers as this added friction is a leading cause of cart abandonment. For similar reasons, thorough phone number verification, in particular, is unlikely if a user is transacting on a home wifi connection (static IP) or shopping on their laptop or tablet. In this scenario, fraudsters who possess a stolen phone number would get the green light as long as the bill-to and ship-to addresses match, though cunning fraudsters—if they aren’t buying digital goods—will opt for curbside pickup, have a product sent to their local FedEx, or contact the shipper and re-route to a different address.
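The weakness described here, approving on an address match alone, is easy to see in a toy decision rule. This is a deliberately naive sketch of the flawed logic, not a recommended design, and the names are invented:

```python
def naive_checkout_decision(bill_to: str, ship_to: str,
                            curbside_pickup: bool = False) -> str:
    """The weak rule described above: approve whenever bill-to matches
    ship-to. Curbside pickup sidesteps the address check entirely, and a
    fraudster can also re-route the shipment after approval."""
    if curbside_pickup:
        return "approve"  # nothing to compare against
    return "approve" if bill_to == ship_to else "review"
```

A fraudster with a stolen card who controls both address fields (or simply chooses pickup) sails through, while a genuine customer shipping a gift to a different address gets flagged, the worst of both worlds.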

In 2022, new account fraud, which also thrives on a fraudulent user’s anonymity, sucked 6% of revenue from almost 70% of businesses. It’s not far-fetched to believe first-checkout fraud is on a similar trajectory, especially with every kind of fraud having a field day (par for the course amid an economic downturn).

Synthetic shoppers

Another concern for e-tailers struggling with first-checkout fraud is the emergence of “synthetic shoppers.” Synthetic identity fraud, identified by the Federal Reserve as the fastest-growing form of financial fraud in the US, has been wreaking havoc in the online shopping world thanks to readily available PII data on the dark web.

Normally, a synthetic fraudster would stitch together a new “Frankenstein identity” using a combination of someone’s real name and phone number, among other PII, then establish a payment history to build credibility (they may even buy an aged email address). But a synthetic shopper merely needs stolen credit card credentials and a shopper profile they can easily auto-fill with a bot. They can start swindling e-commerce merchants in minutes.

At the core of all of this, though, is the identity intelligence issue. Trying to prevent online fraud with static data alone is problematic in its own right; first-checkout fraud is even more concerning because there’s zero historical purchase history data to go off of. What’s the fix?

The real-time remedy

To successfully combat first-checkout fraud, online merchants need real-time data. In fact, they need a stockpile of real-time identity intelligence only the upper crust of companies have at their disposal.

That sounds like a tall order, but Deduce can make up the height differential.

By tapping into our network of 680 million US privacy-compliant identity profiles and 1.5 billion daily authenticated user events across 150,000+ websites and apps, Deduce packs enough real-time insights to thwart (or approve) even the most nascent of shoppers. Typically, 89% of your new customers are already familiar to Deduce; 43% of these users have been seen by Deduce just hours before they make their first purchase. 

Deduce leverages real-time data such as device type and ID, geospatial info, and more to ascertain if a first-time buyer is purchasing that flat screen TV or designer handbag legitimately. If a shopper is legitimate, our geospatial intelligence alone can mitigate common false positive drivers that plague first-checkout and returning shoppers alike. For instance, a customer with mismatched bill-to and ship-to addresses won’t be declined if Deduce sees prior history for the buying identity at the ship-to location.

Geospatial identity intelligence at scale helps approve more legitimate transactions and lower operational costs while capturing incremental fraud.

In totality, Deduce’s real-time insights check for email familiarity and ensure a customer’s email and IP, geography and IP, and user history and address match—all at a massive scale. This arms merchants with high-confidence trust scores for first-time and guest checkout shoppers, enabling them to approve more transactions while reducing step-up and manual review costs.
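Those match signals can be combined into a single trust score with graduated outcomes: approve the confident cases, reject the clearly bad ones, and step up only the ambiguous middle. The weights and thresholds below are invented for illustration; a production system would learn them from data rather than hand-tune them.

```python
def trust_score(signals: dict[str, bool]) -> float:
    """Weighted sum of the match checks named above (illustrative weights)."""
    weights = {
        "email_seen_before": 0.4,        # email familiarity on the network
        "email_ip_match": 0.2,
        "geo_ip_match": 0.2,
        "history_matches_address": 0.2,  # user history vs. ship-to address
    }
    return sum(w for name, w in weights.items() if signals.get(name, False))

def decide(score: float, approve_at: float = 0.7,
           reject_below: float = 0.3) -> str:
    """Approve high-confidence shoppers, reject clear fraud, and reserve
    costly step-up or manual review for the ambiguous middle band."""
    if score >= approve_at:
        return "approve"
    return "reject" if score < reject_below else "step_up"
```

The point of the middle band is exactly the cost argument above: step-up and manual review are expensive, so they should be spent only where the real-time signals genuinely disagree.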

Deduce, which integrates seamlessly with a merchant’s existing fraud stack, also determines the legitimacy of a first-checkout shopper without compromising the user experience. After all, a sluggish UX can turn off a potential customer if a wrongly flagged transaction doesn’t do the trick.

The uptick in online shopping has been a key revenue driver for merchants. The $48 billion in e-commerce fraud losses expected this year will have the opposite effect if mitigating first-checkout fraud isn’t prioritized.

Is your first-checkout engine light on? Contact us today and see how our real-time identity intelligence can accurately approve more legitimate transactions and reduce operational costs while rejecting fraudulent first-time purchases.