Why the Gig May Be Up for Synthetic Fraudsters
Synthetic fraud is plaguing the gig economy, sharing economy, & more
Halloween is still a couple of months away, but synthetic fraudsters are already getting in on the fun by cosplaying as real people.
Social media. Banks. Elections. Universities. Fake profiles and accounts have infiltrated nearly every facet of everyday life. Online fraud is up 20% this year, with stolen and synthetic identity fraud representing 85% of all cases.
The ubiquity of an Uber or DoorDash makes the gig economy another lucrative target for synthetic fraudsters; in fact, one in three users of gig economy apps is a victim of fraud. Other on-demand, app-based services, such as dating or home-sharing apps, are vulnerable as well.
What do these schemes look like across the gig, sharing, and dating app economies? Even by today’s standards, the scale and ingenuity behind these operations are impressive. But are these pesky synthetic fraudsters untouchable? Not quite.
Rideshare mafiosos
A recent WIRED feature revealed just how susceptible an Uber or Lyft is to synthetic fraud for anyone with drive, creativity, and even just a novice grasp of technology.
Priscila Barbosa, the protagonist of the WIRED article, embodied these three attributes (and then some). After arriving at JFK International Airport in 2018 with two suitcases, $117 and, crucially, an iPhone, the Brazil native would soon realize her own perverse version of the American Dream—and send shockwaves through the entire gig economy ecosystem.
Capitalizing on loose identity verification protocols, Barbosa and her team of associates made a cushy living off stolen identities sourced from the dark web, sometimes combining real and fake PII (Personally Identifiable Information) to spawn synthetic “Frankenstein” identities. Barbosa used these identities to create fake Uber accounts she’d then rent out for a fee. She made over $10K per month loaning accounts to non-citizens who lacked the ID necessary to drive for Uber, plus profits from driver referral bonuses.
When rideshare apps beefed up their verification processes and asked drivers to sign up in person, Barbosa found another way in or moved on to other services like DoorDash and Instacart. Her knack for improvisation was impressive, as was her deft use of bots, GPS spoofing, and photo editing apps to forge documents and avoid detection.
By the time the FBI caught up to Barbosa and her “rideshare mafia,” she’d netted almost $800K in less than three years. One might assume an EECS degree would be table stakes for such a large-scale operation, but, as Barbosa showed, all that’s needed is a smartphone and a dream.
Synthetic landlords
The sharing economy faces its own synthetic crisis, perhaps most notably with home rental services like Airbnb and VRBO. Fake users posing as landlords or property owners are cashing in on fraudulent listings, infuriating both unsuspecting travelers and the properties’ rightful owners.
Surely, no one envies the poor woman in London whose home was listed on Booking.com without her knowledge and rented out to tourists who, upon arrival, quickly discovered they’d been duped. And this went on for weeks!
For its part, Airbnb has tried to stem the fake listing epidemic. Last year, Airbnb deleted 59K fake listings, stopped another 157K from joining the platform, and incorporated AI to bolster its verification process.
Little did Airbnb (and VRBO) know, their biggest rental scam yet would hit the newswire just a few months later: 10K fake Airbnb and VRBO reservations across 10 states, to the tune of $8.5M. Indicted in January 2024, the two perpetrators steadily built their illegitimate short-term rental business over the course of a few years, listing properties across the country under fake host names and stolen identities. That two people could execute a con of this scale speaks to the intelligence of synthetic fraudsters and the AI tools augmenting their efforts.
Love at first deepfake
Synthetic fraudsters are also riding roughshod over the dating app world. Users seeking a love connection are falling for fake profiles and, in many cases, sending substantial sums of money to their fraudulent “admirers.”
In 2023, there were more than 64K romance scams in the US, with total losses reaching $1.14B. And these numbers are conservative, given that embarrassed victims may never come forward. Dating apps are scrambling to retain Gen Z women, who are jumping ship: a Bumble survey found that nearly half of its female users feared fake profiles and scams on the platform.
Dating app cons, including schemes such as catfishing and pig butchering, are easily executed by synthetic fraudsters equipped with Gen AI. Deploying fake dating app profiles en masse? Enlisting automated, humanlike chatbots to seduce victims? Deepfaking audio and video, plus AI-generated profile pictures that can’t be reverse-image-searched via Google? It’s all possible with Gen AI, making synthetic fraudsters appear legitimate even to discerning users.
Just how many fakes are there on dating apps? The recent Netflix documentary Ashley Madison: Sex, Lies & Scandal revealed that 60% of the profiles on the Ashley Madison app are bogus. Suddenly, blind dates with friends of friends don’t sound all that bad…
The gig is up
Considering the low barrier to entry and the democratization of Gen AI, among other factors, the deck might appear stacked against companies battling synthetic fraudsters, especially smaller businesses not named Uber or Airbnb.
But renewed hope lies in a novel approach: catching these fake identities early, during the account creation workflow. In fact, preemptive detection is the only way to neutralize AI-driven stolen and synthetic identities. Why? Because once these accounts are created, it’s essentially curtains for fraud prevention teams; at that point, it’s too late to distinguish the humanlike behaviors of synthetics from those of their truly human counterparts.
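What does “early” actually look like? Conceptually, the fraud check sits in front of the account creation step, so a risky signup never becomes an account in the first place. Here’s a minimal Python sketch of such a gate; the signal checks, weights, threshold, and function names are all hypothetical assumptions for illustration, not any vendor’s actual API:

```python
# A minimal sketch of a pre-account-creation gate. Everything here is
# an illustrative assumption: signals, weights, and threshold included.
from dataclasses import dataclass

@dataclass
class SignupAttempt:
    email: str
    device_id: str
    ip_address: str

# Toy in-memory "intelligence": devices and networks already tied to
# flagged signups. A real system would query live identity data instead.
FLAGGED_DEVICES = {"dev-9f3a"}
FLAGGED_NETWORKS = {"203.0.113."}
DISPOSABLE_DOMAINS = {"mailinator.com"}

def risk_score(attempt: SignupAttempt) -> float:
    """Score a signup attempt before any account exists."""
    score = 0.0
    if attempt.device_id in FLAGGED_DEVICES:
        score += 0.5  # same device fingerprint reused across many signups
    if any(attempt.ip_address.startswith(n) for n in FLAGGED_NETWORKS):
        score += 0.3  # network previously tied to fraud clusters
    if attempt.email.split("@")[-1] in DISPOSABLE_DOMAINS:
        score += 0.2  # disposable email domain
    return score

def handle_signup(attempt: SignupAttempt) -> str:
    # The decision happens BEFORE the account is created; once it exists,
    # synthetic behavior is nearly indistinguishable from a real user's.
    if risk_score(attempt) >= 0.5:
        return "step_up_verification"
    return "create_account"

print(handle_signup(SignupAttempt("a@mailinator.com", "dev-9f3a", "203.0.113.7")))
# -> step_up_verification
```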
Pre-account creation, on the other hand, allows for a bird’s-eye view that analyzes identities as a group rather than one by one. Verifying identities individually (the more traditional strategy) won’t cut it with synthetic and AI-driven stolen identities, but collective verification reveals signs of fraud that would otherwise go undetected.
For example, if multiple identities perform the same activity on the same website or app at the exact same time every week, something is likely afoot. To rule out a false positive, fraud teams can cross-reference trust signals such as device, network, and geolocation before flagging, as the sketch below illustrates.
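Here’s a minimal sketch of that group-level check on toy data; the event fields, minimum cluster size, and flagging rule are illustrative assumptions rather than a production pipeline:

```python
# Illustrative sketch: field names, the minimum cluster size, and the
# flagging rule are assumptions, not a production detection pipeline.
from collections import defaultdict
from datetime import datetime

# (identity_id, event timestamp, device fingerprint, network prefix)
events = [
    ("id-1", datetime(2024, 7, 1, 9, 0), "dev-A", "198.51.100."),
    ("id-2", datetime(2024, 7, 1, 9, 0), "dev-A", "198.51.100."),
    ("id-3", datetime(2024, 7, 8, 9, 0), "dev-A", "198.51.100."),
    ("id-4", datetime(2024, 7, 3, 14, 2), "dev-B", "203.0.113."),
]

# Group identities by a recurring weekly slot: (weekday, hour, minute).
slots = defaultdict(list)
for identity, ts, device, net in events:
    slots[(ts.weekday(), ts.hour, ts.minute)].append((identity, device, net))

MIN_CLUSTER = 3  # identities sharing one slot before we call it a pattern

for slot, members in slots.items():
    if len(members) < MIN_CLUSTER:
        continue  # too few identities to suggest coordination
    devices = {d for _, d, _ in members}
    networks = {n for _, _, n in members}
    # Cross-reference trust signals: identical timing PLUS a shared
    # device or network makes a false positive far less likely.
    if len(devices) == 1 or len(networks) == 1:
        print(f"flag for review: {[i for i, _, _ in members]} (slot={slot})")
# -> flag for review: ['id-1', 'id-2', 'id-3'] (slot=(0, 9, 0))
```

Individually, none of these identities would trip a traditional one-at-a-time check; only the group view exposes the shared schedule, device, and network.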
When tens of thousands (or more) of AI-powered synthetic identities are storming account creation workflows, the preemptive, bird’s-eye approach is as foolproof as it gets. The alternative: churn, lost revenue, and potentially a PR nightmare.
Does singling out synthetic accounts require a gargantuan chunk of real-time identity intelligence, on par with that of the “FAANG gang”? Yes. Is accessing this much data even possible? Believe it or not, also yes.
The Deduce Identity Graph packs the requisite real-time identity data to confidently deem an account real or fake, underscored by a trust score that is 99.5% accurate. This gives businesses of all sizes much more than a fighting chance. And for rideshare mafias, fake Airbnb landlords, and dating app swindlers, the gig may finally be up.