Synthetic fraud is plaguing the gig economy, sharing economy, & more

Halloween is still a couple of months away, but synthetic fraudsters are already getting in on the fun by cosplaying as real people.

Social media. Banks. Elections. Universities. Fake profiles and accounts have infiltrated nearly every facet of everyday life. Online fraud is up 20% this year, with stolen and synthetic fraud representing 85% of all cases.

The ubiquity of an Uber or DoorDash makes the gig economy another lucrative target for synthetic fraudsters. In fact, one in three users is a victim of fraud on gig economy apps. Other on-demand, app-based services—such as dating or home-sharing apps—are vulnerable as well.

What do these schemes look like across the gig, sharing, and dating app economies? Even by today’s standards, the scale and ingenuity behind such operations is impressive, but are these pesky synthetic fraudsters untouchable? Not quite.

Rideshare mafiosos

A recent WIRED feature revealed just how susceptible an Uber or Lyft is to synthetic fraud, assuming one possesses the drive, creativity, and even a novice grasp of technology.

Priscila Barbosa, the protagonist of the WIRED article, embodied these three attributes (and then some). After arriving at JFK International Airport in 2018 with two suitcases, $117, and, crucially, an iPhone, the Brazil native would soon realize her own perverse version of the American Dream—and send shockwaves through the entire gig economy ecosystem.

Capitalizing on loose identity verification protocols, Barbosa and her team of associates made a cushy living stealing identities from the dark web, sometimes combining real and fake PII (Personally Identifiable Information) to spawn synthetic “Frankenstein” identities. Barbosa used these identities to create fake Uber accounts she’d then rent out for a fee. Barbosa made over $10K per month loaning accounts to non-citizens who lacked the ID necessary to drive for Uber, including profits earned from driver referral bonuses.

When rideshare apps beefed up their verification processes and asked drivers to sign up in person, Barbosa found another way in, or moved on to other services like DoorDash and Instacart. Barbosa’s knack for improvisation was impressive, as was her deft usage of bots, GPS spoofing, and photo editing apps to avoid detection and forge documents.

By the time the FBI caught up to Barbosa and her “rideshare mafia,” she’d netted almost $800K in less than three years. One might assume an EECS degree would be table stakes for such a large-scale operation but, as Barbosa showed, all that’s needed is a smartphone and a dream.

Synthetic landlords

The sharing economy faces its own synthetic crisis, perhaps most notably with home rental services like Airbnb and VRBO. Fake users, posing as landlords or property owners, are cashing in on fraudulent listings of properties, infuriating unsuspecting travelers and the rightful owners of said properties.

Surely, no one envies the poor woman in London whose home was listed on Booking.com—unbeknownst to her—and rented by tourists who, upon arrival, quickly found out they’d been duped. And this went on for weeks!

For its part, Airbnb has tried to stem the fake listing epidemic. Last year, Airbnb deleted 59K fake listings and stopped 157K from joining the app, even incorporating AI to bolster the verification process.

Little did Airbnb (and VRBO) know, their biggest rental scam yet would hit the newswire just a few months later. 10K fake Airbnb and VRBO reservations across 10 states. The damage: $8.5M. Indicted in January 2024, the two perpetrators steadily built their illegitimate short-term home rental business over the course of a few years, listing properties across the country under fake host names and stolen identities. That just two people could execute a con of this scale speaks to the ingenuity of synthetic fraudsters—and the AI tools augmenting their efforts.

Love at first deepfake

Synthetic fraudsters are also running roughshod in the dating app world. Users seeking a love connection are falling for fake profiles and, in many cases, sending substantial amounts of money to their fraudulent “admirers.”

In 2023, there were more than 64K romance scams in the US, with total losses reaching $1.14B—and these numbers are conservative given that victims may be embarrassed to come forward. Dating apps are under particular pressure to retain Gen Z female users, many of whom are jumping ship: a Bumble survey found that nearly half of its female users feared fake profiles and scams on the platform.

Dating app cons, including schemes such as catfishing and pig butchering, are easily executed by synthetic fraudsters equipped with Gen AI. Deploying fake dating app profiles en masse? Enlisting automated, humanlike chatbots to seduce victims? Deepfaking audio and video, plus AI-generated profile pictures that can’t be reverse-image-searched via Google? It’s all possible with Gen AI, making synthetic fraudsters appear legitimate even to discerning users.

Just how many fakes are there on dating apps? The recent Netflix documentary Ashley Madison: Sex, Lies & Scandal revealed that 60% of the profiles on the Ashley Madison app are bogus. Suddenly, blind dates with friends of friends don’t sound all that bad…

The gig is up

Considering the low barrier to entry and the democratization of Gen AI, among other factors, it might appear the deck is stacked against companies battling synthetic fraudsters, especially for smaller businesses not named Uber or Airbnb.

But renewed hope lies in a novel approach: catching these fake identities early in the account creation workflow. In fact, preemptive detection is the only way to neutralize AI-driven stolen and synthetic identities. Why? Because once these accounts are created, it’s essentially curtains for fraud prevention teams—too late in the game to distinguish the human-like behaviors of synthetics from their truly human counterparts.

Pre-account creation, on the other hand, allows for a bird’s-eye view that analyzes identities as a group rather than one by one. Verifying identities individually, i.e., the more traditional strategy, won’t cut it with synthetic and AI-driven stolen identities, but collective verification reveals signs of fraud that otherwise would go undetected.

For example, if multiple identities perform the same activity on the same website or app at the exact same time every week, something is likely afoot. To avoid a possible false positive, cross-referencing against trust signals like device, network, geolocation, and more assures fraud teams that flagging is the right move.
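To make the idea concrete, here is a minimal sketch of that collective check. This is an illustration of the general technique, not Deduce's actual implementation; every class, field, and function name here is hypothetical. Signup events are grouped by site, activity, and weekly time slot, and a cluster of distinct identities acting in lockstep is flagged only after cross-referencing trust signals like shared devices or networks.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class SignupEvent:
    identity_id: str
    activity: str      # e.g. "account_created"
    site: str
    weekly_slot: int   # hour-of-week bucket, 0-167
    device_id: str
    ip_network: str

def flag_coordinated_identities(events, min_cluster=5):
    """Group events by (site, activity, weekly time slot). Many distinct
    identities acting at the exact same time every week is suspicious,
    but we only flag when device/network signals also overlap."""
    clusters = defaultdict(list)
    for e in events:
        clusters[(e.site, e.activity, e.weekly_slot)].append(e)

    flagged = set()
    for group in clusters.values():
        identities = {e.identity_id for e in group}
        if len(identities) < min_cluster:
            continue
        # Cross-reference trust signals: lockstep timing *plus* fewer
        # devices or networks than identities makes a false positive
        # (e.g. a popular signup hour) much less likely.
        devices = {e.device_id for e in group}
        networks = {e.ip_network for e in group}
        if len(devices) < len(identities) or len(networks) < len(identities):
            flagged |= identities
    return flagged
```

A real system would use richer signals and probabilistic scoring, but the shape is the same: analyze identities as a group, then confirm with independent trust signals before flagging.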

When tens of thousands (or more) of AI-powered, synthetic identities are storming account creation workflows, the preemptive, bird’s-eye approach is as foolproof as it gets. The alternative: churn, lost revenue, and potentially a PR nightmare.

Does singling out synthetic accounts require a gargantuan chunk of real-time identity intelligence, on par with that of the “FAANG gang”? Yes. Is accessing this much data even possible? Believe it or not, also yes.

The Deduce Identity Graph packs the requisite real-time identity data to confidently deem an account real or fake, underscored by a trust score that is 99.5% accurate. This gives businesses of all sizes much more than a fighting chance. And for rideshare mafias, fake Airbnb landlords, and dating app swindlers, the gig may finally be up.

Synthetic fraudsters can’t fake it anymore

No one embraces the aphorism “fake it till you make it” more than a synthetic fraudster.

This burgeoning variety of bad actor combines stolen info, such as a phone number and address, with fake info to create an entirely new (and bogus) identity.

A recent study from Aite-Novarica Group predicted that synthetic identity fraud will jump from $1.8B in 2021 to $2.42B by 2023. It also surveyed a group of top fraud executives who pegged “synthetic identities resulting from application fraud” as one of their most worrisome threats. And, as if the alarm bells weren’t already loud enough, the Federal Reserve put out a video in February to raise awareness about synthetic identity fraud.

Let’s take a closer look at the synthetic fraud landscape thus far in 2022. Then, we’ll show you how Deduce is outflanking the fakers.

Chasing ghosts

Our initial primer on synthetic identity fraud in February cited experts who foresaw an uptick in synthetic attacks in 2022. Three months in, it seems these experts lived up to their reputation: synthetic identities continue to negatively impact myriad industries and leave consumer victims in shambles.

In 2020, financial institutions suffered $20 billion in losses due to synthetic identity fraud. The use cases keep piling up: suspicious auto loan applications (260% increase); Buy Now, Pay Later fraud (66% increase from 2020 to 2021); and synthetic refund fraud, to name a few.

Financial harm to businesses isn’t the only concern. Profits from synthetic identity fraud are also linked to terrorism and human trafficking. Parents even have to protect the financial futures of their young children who may not realize their identity was stolen until after applying for a credit card as an adult. Hacked school databases and social media accounts led to 1.25 million stolen child identities in 2020.

The most frustrating element of synthetic identity fraud for consumers, businesses, and law enforcement is the elusiveness of the perpetrator. Pinpointing the real human behind a “Frankenstein identity” is like chasing a ghost. A mishmash of, say, a random person’s address, another individual’s stolen social security number, and a made-up name is more than enough to throw investigators off the scent. Complicating matters is the patience of synthetic fraudsters, who often prefer playing the long game by taking out smaller loans, paying bills on time, and otherwise keeping a low profile.

Fraud prevention solutions are tasked with a different set of challenges, namely: how do you stop a synthetic fraudster early, before an attack can take place, and is that even possible?

You can’t fake it

Preemptively stopping synthetic fraudsters in their tracks is indeed possible—if the largest real-time identity graph in the US is at your disposal.

Deduce’s Identity Network is just that. We’re a relatively young company, but our data is clever beyond its years, powered by more than 450 million anonymized US user profiles (many US residents have more than one email) and 1.4 billion daily activities. 

Think of Deduce as the wise old owl who’s seen every fraudster scheme in the book. Our vast database of user profiles and activity successfully prevents synthetic identity fraud for one key reason: it’s too expensive for synthetic fraudsters to fake us out. Circumventing our defenses would require too many websites, too diverse a range of activity, and too long a stretch of time, all using the same device and identity. (Fraudsters are a thrifty bunch.)

Given the patience of synthetic fraudsters and their efforts to legitimize fake identities by opening bank accounts, paying utility bills, etc., the static data traditionally used to prevent breaches isn’t sufficient. Real-time user activity, on the contrary, gives the Deduce intelligence layer the upper hand no matter how many real and fake details they’ve cobbled together.

And, because the Deduce Identity Network offers both risk and trust signals, you’ll combat synthetic bad actors while making sure legitimate users aren’t mistaken for false positives.

If you’re looking for a synthetic antiseptic, contact us today.

Static data alone can’t ward off synthetic fraudsters

The synthetic ascension

In 2021, identity fraud targeting US-based e-tailers made up 30% of all fraud losses. Within that troubling percentage lies an uptick in synthetic identity fraud, in which bad actors fuse stolen data (phone numbers, emails) with fake data to create a bogus identity.

Post-pandemic, fraudsters have feasted on users’ anxiety and increased online activity, phishing login information with very little effort. Given this trend, experts foresee another rise in synthetic identity fraud in 2022, especially in the financial services arena and on platforms that utilize seamless signup and other quick decisions.

With factors like social security number randomization making synthetic “Frankenstein identities” more prevalent, stopping this mish-mashed form of identity fraud is imperative before it festers into a costly and potentially years-long disaster.

Not your average identity fraud

The challenge of preventing synthetic identity fraud lies in its patchwork composition. A synthetic identity pulls together fake and legit info from multiple sources instead of targeting a single consumer victim, making it much more difficult to detect. With no defrauded person to tip off companies, accounts created via synthetic identities can remain active indefinitely, like clandestine, money-sucking leeches, only to vanish once the on-file credit card maxes out.

Again, there’s no real-life person to trace the account back to, which complicates the identification of synthetic identity fraud, much less the calculation of losses (assuming fraud is circled as the culprit). Unfortunately, differing interpretations of synthetic identity fraud among enterprises can often chalk cases up to credit-related issues, leaving credit lenders and related providers to carry the financial burden.

If need be, synthetic fraudsters can bypass defenses with more than a fake SSN and stolen email. Forget Frankenstein identities — the craftiest of synthetic fraudsters are combining facial features from multiple people with AI to create realistic “Frankenstein faces.” Yet another wire-crossing maneuver that throws traditional fraud prevention solutions off the scent.

The synthetic antiseptic

Old school fraud prevention tools rely on static data such as physical address and device fingerprinting to detect bad actors. This won’t cut it for synthetic identity fraud.

The only way to effectively root out stealthy synthetic fraudsters is to combine static data with live and historical real-time user activity data. Adding this extra layer of real-time intelligence, such as behavioral biometrics, time of day, and location, leaves fraudsters with too many holes to cover when building an authentic digital “legend,” and gives companies more than enough information to spot a fraudulent identity.
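As a toy illustration of blending the two layers (this is not Deduce's API; the attribute names and weights are invented for the example), a risk scorer might weight static identity checks modestly, since those are exactly what synthetic fraudsters fabricate, and lean harder on real-time signals like device familiarity, geolocation consistency, and time-of-day patterns:

```python
def risk_score(static, activity):
    """Blend static identity attributes with real-time activity signals
    into a 0.0-1.0 risk score. Both arguments are plain dicts; all key
    names are hypothetical."""
    score = 0.0
    # Static checks alone are easy to fake, so they carry modest weight.
    if not static.get("address_matches_records", True):
        score += 0.2
    if static.get("ssn_issued_after_randomization", False):
        score += 0.1
    # Real-time behavior is much harder to forge at scale.
    if not activity.get("known_device", True):
        score += 0.25
    if not activity.get("geolocation_consistent", True):
        score += 0.25
    # Logging in far outside the user's usual hours adds up to 0.2.
    hour_gap = abs(activity.get("login_hour", 12)
                   - activity.get("typical_login_hour", 12))
    score += min(hour_gap, 12) / 12 * 0.2
    return min(score, 1.0)
```

Under this sketch, a trusted session (known device, consistent location, usual hours) scores near zero even if one static attribute looks odd, while a Frankenstein identity on a fresh device in the wrong place at the wrong time scores near the maximum.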

This is precisely the extra punch Deduce provides. We pack more than 450 million anonymized US profiles and 1.4 billion daily user activities (logins, account creations, checkouts, etc.) from over 150,000 websites and apps into our real-time Identity Network, protecting organizations from financial losses and the other nightmarish side effects of synthetic identity fraud. For example, a solution that’s solely reliant on static data will fall victim to false positives and ultimately turn good customers away, while the Deduce approach is able to contextualize scenarios where a new device or other factor may not be consistent with identity fraud.

Fraudsters can fake a number of different attributes, but nothing they spoof can outsmart the collective intelligence and profile history of the Deduce Network. The breadth and diversity of our data (transactions, social media activity, etc.) is too gargantuan — and too expensive for the average fraudster to circumvent.

Tap into the Deduce Identity Network today and bolster your defense against synthetic identity fraud. Contact us here to get started.