The war between democracy and generative AI rages on

2024 is a big year for democracy. Half of the global population resides in countries that have an election this year, and all eyes, of course, will be on the main event: the grudge match between Joe Biden and Donald Trump.

But running parallel to this frenzied election year is the rapid evolution of artificial intelligence and its ever-growing assortment of applications. Just when we think it can’t get any smarter or more believable, AI leapfrogs our expectations. It’s exciting for users looking to post amusing memes or videos in their Slack channels—not so much for politicians. Voters are frustrated, too. Is that audio clip of a candidate’s off-color remarks genuine, or deepfaked malarkey?

The deceptive tactics of generative AI-powered deepfakes and fake humans—and their potential to swing elections—aren’t just fodder for cybersecurity trades. Biden issued an executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” late last year. Several tech giants, including Google, Microsoft, and OpenAI, recently met at the Munich Security Conference and agreed to take “reasonable precautions” to stop AI from influencing elections.

Executives from leading tech companies gather at the 2024 Munich Security Conference.

But critics contend that Biden’s executive order doesn’t sufficiently address synthetic fraud and that the Munich pact isn’t proactive enough—and fraudsters (especially those with AI at their disposal) are always a step ahead of whatever countermeasures platforms or the government put in place. Furthermore, the tech companies tasked with hosting and moderating deepfaked content have laid off more than 40K workers. Without a new approach to neutralizing synthetic fraudsters, the fakery will continue to snowball.

Here are the ways in which generative AI is defrauding elections globally, and how a re-tooled approach may help social media and AI platforms fight back.

1-800-ROBO-CALL

Video deepfakes steal most of the headlines, but AI-generated audio is more advanced and democratized (at least until hyper-realistic video offerings like OpenAI’s Sora become widely available). One could even argue that deepfaked audio is more effective at altering elections, especially after an AI-generated Biden robocall tried to dissuade people from voting in the New Hampshire primary.

Context, or lack thereof, is what makes audio deepfakes tough to recognize. The voters on the other end of the line lack the visual indicators that give video deepfakes away.

This context deficit bolsters the believability of so-called “grandparent” scams as well, in which a fraudster clones the voice of someone who’s close to the victim and convinces them to wire money. Personalization brings credibility. Just as Cameo users can have celebrities record birthday wishes for a loved one, now AI applied to voice or video patterns can have a personality or politician record a custom message.

If you’re in the business of artificially swaying voter sentiment and rigging elections, simply copy the voice of a relative or friend, spew some disinformation about Candidate XYZ or Prop ZYX, and move on to the next robocall.

In February, the FCC banned robocalls that use AI-generated voices. Time will tell if this puts audio deepfakers on hold. (Don’t count on it.)

A picture’s worth a thousand votes

AI image generators are also under the microscope. The Center for Countering Digital Hate, a watchdog group, found that tools like Midjourney and ChatGPT Plus can create deceptive images capable of spreading false political information.

The study, which also tested DreamStudio and Microsoft’s Image Creator, produced fake election imagery in more than 40% of test cases. Midjourney performed significantly worse, generating disinformation 65% of the time—not a huge surprise considering the company didn’t sign the Munich Security Conference pact and employs only 11 team members.

The realistic nature of these images is startling. In March, an AI-generated photo purporting to show Black Trump supporters posing with the former president was revealed to be fake, apparently an attempt to draw Black voters away from the Biden campaign. Several AI-generated and equally bogus images of Trump being arrested also proliferated across social media.

AI-generated political images are incredibly lifelike.

Since the watchdog report, leading AI generators have put guardrails in place. The most obvious move is to disallow prompts involving “Biden” or “Trump.” However, jailbreaking maneuvers can sometimes bypass such controls. For example, instead of typing a candidate’s name, bad actors can key in their defining physical characteristics along with, say, “45th president,” and produce the desired image.

Take political candidates out of the equation. There are still other visuals that can sway voters. How about a fake image of a Trump supporter smashing a ballot box open, or Biden supporters lighting Mar-a-Lago ablaze? Election tampering campaigns don’t always target a specific candidate or political party but rather a divisive issue such as freedom of choice or border control. For instance, images of migrants illegally crossing the Rio Grande or climbing a fence, fake or not, are bound to rile up one group of voters.

A global crisis

International examples of AI-based election interference could portend trouble for the US, but hopefully will inspire technologists and government officials to rethink their cybersecurity approach.

In Slovakia, a key election was tainted by AI-generated audio that mimicked a candidate’s voice saying he had tampered with the election and, worse (for some voters), planned to raise beer prices. Indonesian Gen-Z voters warmed up to a presidential candidate and previously disgraced military general thanks to a cat-loving, “chubby-cheeked” AI-generated image of him. Bad actors in India, meanwhile, are using AI to “resurrect” dead political figures who in turn express their support for those currently in office.

An AI-generated avatar of M Karunanidhi, the deceased leader of India’s DMK party.

The image of the Indonesian presidential candidate is nothing more than a harmless campaign tactic, but are the other two examples the work of election-hacking-as-a-service schemes? Troubling as the term might be, this is our new democratic reality: hackers contracted to unleash hordes of synthetic identities across social media, spreading false, AI-generated content to influence voter sentiment however they please.

An Israeli election hacking group dubbed “Team Jorge,” which controls over 30K fake social media profiles, meddled in a whopping 33 elections, according to a Guardian report. If similar groups aren’t already threatening elections in the US, they will soon.

The road ahead

Combating AI-powered election fraud is an uphill battle, and Midjourney CEO David Holz believes the worst is yet to come. “Anybody who’s scared about fake images in 2024 is going to have a hard 2028,” Holz warned during a recent video presentation. “It will be a very different world at that point…Obviously you’re still going to have humans running for president in 2028, but they won’t be purely human anymore.”

What is the answer to this problem, this future Holz sees in which every political candidate has a lifelike “deepfake chatbot” armed with manufactured talking points? Raising public awareness of generative AI’s role in election tampering is important but, ironically, it can also backfire. As more people learn how complex and prevalent deepfaked audio, video, and images have become, growing skepticism can cloud their judgment. Political circles call this the “liar’s dividend”: jaded, deepfake-conscious voters start mislabeling genuine media as fake. It doesn’t help matters when presidential candidates label mainstream media similarly while publicizing their own view of the world.

Social media and generative AI platforms have their work cut out for them. Neutralizing, much less curbing, AI-powered election fraud pits them against artificial intelligence and synthetic identities that are disturbingly lifelike and nearly undetectable. This includes SuperSynthetic™ “sleeper” identities that can hack elections just as easily as they swindle finservs.

Deepfaked synthetic identities are too smart and real-looking to face head-on. Stopping these slithery fraudsters requires an equally crafty strategy, and a sizable chunk of real-time, multicontextual, activity-backed identity intelligence. Our money is on a “top-down” approach that, prior to account creation, analyzes synthetic identities collectively rather than individually. This bird’s eye view picks up on signature online behaviors of synthetic identities, patterns that rule out coincidence.

The Deduce Identity Graph is monitoring upwards of 30 million synthetic identities in the US alone. Some of these identities will attempt to “hack the vote” come November. Some already are. A high-level approach that examines them as a group—before they can deepfake unsuspecting voters—may be democracy’s best shot.

Celebrities, politicians, and banks face a deepfake dilemma

We’re reaching the “so easy, a caveman can do it” stage of the deepfake epidemic. Fraudsters don’t need a computer science degree to create and deploy armies of fake humans, nor will it drain their checking account (quite the opposite).

As if deepfake technology wasn’t accessible enough, the recent unveiling of OpenAI’s Sora only simplifies—and complicates—matters. Sora, which for now is available only to certain users, produces photorealistic video scenes from text prompts. Not to be outdone, Alibaba demonstrated its EMO model making the Sora character sing. The lifelike videos created by such deepfake platforms fool even the most sophisticated liveness detection solutions.

AI-powered fraud isn’t flying under the radar anymore—the prospect of taxpayers losing upwards of one trillion dollars will do that. One burgeoning scam, known as pig butchering, was featured on an episode of John Oliver’s Last Week Tonight. These scams start as a wrong-number text message and, over the course of weeks or months, lure recipients into bogus crypto investments. Conversational generative AI tools like ChatGPT, combined with clever social engineering, make pig butchering a persuasive and scalable threat. Accompanying these texts with realistic deepfaked images only bolsters the perceived authenticity.

Companies are taking notice, too. So is the Biden administration, though its executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” in late 2023 didn’t sufficiently address synthetic fraud—specifically cases involving Generative AI and deepfakes.

The damage caused by AI-generated, deepfaked identities continues to worsen. Here is how it has permeated seemingly every facet of our lives, and how banks can stay one step ahead.

Hacking the vote

The 2024 presidential election is shaping up to be quite the spectacle, one that will capture the eyes of the world and, in all likelihood, further sever an already divided populace. Citizens exercising their right to vote is crucial, but the advancement of deepfake technology raises another concern: are voters properly informed?

Election-hacking-as-a-service sounds like the work of dystopian fiction, but it’s just the latest threat politicians and their constituents need to worry about. Highly sophisticated factions—in the US and abroad—are leveraging generative AI and deepfakes to weaponize disinformation and flip elections like flapjacks.

Some election meddlers have changed the outcome of 30+ elections. Remember the deepfaked Biden robocall ahead of the New Hampshire primary? That’s the handiwork of an election hacking superteam. A personalized text message or email might not be from [insert candidate here]. A video portraying an indecent remark could be fabricated. Some voters may say they’re “leaning” towards voting yay or nay on Measure Y or Prop Z, when in actuality they’re being pushed in either direction by synthetic election swingers.

In February, a slew of tech behemoths signed an accord to fight back against AI-generated election hacking. Like Biden’s executive order, the accord is a step in the right direction; time will tell if it pays dividends.

The case of the deepfaked CFO

Deepfaked audio and video is convincing enough to sway voters. It can also dupe multinational financial firms out of $25 million—overnight.

Just ask the Hong Kong finance worker who unknowingly wired about $25.6 million to fraudsters after attending a video conference call with what he believed were his colleagues. A synthetic identity posing as the company’s CFO authorized the transactions—15 total deposits into five accounts—which the worker discovered were fraudulent only after checking in with his corporate office.

It appears the bad actors used footage of past video conferences to create the deepfaked identities. Data from WhatsApp and emails helped make the identities look more legitimate, which shows the lengths these deepfaking fraudsters are willing to go to.

A couple of years ago, fraudsters would have perpetrated this attack in a simpler fashion, via phishing, for example. But with the promise of bigger paydays, and much less effort and technical knowhow required thanks to the ongoing AI explosion, cyber thieves have every incentive to deepfake companies all the way to the bank.

The Taylor Swift incident

Celebrities, too, are getting a taste of just how destructive deepfakes can be.

Perhaps the most notable (and widely covered) celebrity deepfake incident happened in January when sexually explicit, AI-generated pictures of Taylor Swift popped up on social media. Admins on X/Twitter, where the deepfaked images spread like wildfire, eventually blocked searches for them, but not before the images garnered nearly 50 million views.

Pornographic celebrity deepfakes aren’t a new phenomenon. As early as 2017, Reddit users were superimposing the faces of popular actresses—such as Scarlett Johansson and Gal Gadot—onto porn performers. But AI technology back then was nowhere near where it is today. Discerning users could spot a poorly rendered face-swap and determine a video or image was fake.

Shortly after the Taylor Swift fiasco, US senators proposed a bill that would let victims of AI-generated deepfakes sue the videos’ creators—long overdue considering a 2019 report found that non-consensual porn comprised 96 percent of all deepfake videos.

Deepfaking the finservs

Whether it’s hacking elections, spreading pornographic celebrity deepfakes, or posing as a company’s CFO, deepfakes have never been more convincing or dangerous. And, because fraudsters want the most bang for their buck, naturally they’re inclined to attack those with the most bucks: banks, fintech companies, and other financial institutions.

The $25 million CFO deepfake speaks to just how severe these cases can be for finservs, though most deepfaking fraudsters prefer a measured approach that spans weeks or months. Such is the M.O. of SuperSynthetic™ “sleeper” identities. This newest species of synthetic fraudster is too crafty to settle for a brute-force offensive. Instead, it leverages an aged and geo-located identity that’s intelligent enough to make occasional deposits and interact with a banking website or app for an extended period to pass as a genuine customer.

However, SuperSynthetics achieving their long-awaited goal—accepting a credit card or loan offer, cashing out, and scramming—is contingent on one vital step: passing the onboarding process.

This is where deepfakes come in. During onboarding, SuperSynthetics can deepfake driver’s licenses and other forms of ID, even live video interviews if need be. Given the advancement in deepfake technology, and the unreliability of liveness detection, the only real chance banks have is to stop SuperSynthetic identities before they’re onboarded.

Using a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence, preemptively sniffing out SuperSynthetics is indeed possible. This is the foundation of a “top-down” approach that analyzes synthetic identities collectively—different from the one-by-one approach of the olden days. A bird’s eye view of identities uncovers signature online behaviors and patterns consistent enough to rule out a false positive. Multiple identities depositing money into their checking account every Wednesday at 9:27 p.m.? Something’s afoot.
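To make the Wednesday-at-9:27 idea concrete, here’s a minimal sketch of what a top-down scan could look like. This is not Deduce’s actual system; the function name, thresholds, and event format are illustrative assumptions. The point is that the signal lives across identities, not within any single one:

```python
from collections import defaultdict
from datetime import datetime

def flag_synchronized_identities(events, min_identities=3, min_weeks=4):
    """Flag groups of identities that act at the same weekday/minute, week after week.

    events: iterable of (identity_id, datetime) pairs, e.g. deposit timestamps.
    Returns {(weekday, hour, minute): set_of_identity_ids} for suspicious slots.
    Thresholds are hypothetical; a real deployment would tune them.
    """
    # identity -> time slot -> set of distinct (ISO year, ISO week) seen
    seen = defaultdict(lambda: defaultdict(set))
    for identity, ts in events:
        slot = (ts.weekday(), ts.hour, ts.minute)
        seen[identity][slot].add(ts.isocalendar()[:2])

    # Keep identities that repeat the same slot across many weeks
    recurring = defaultdict(set)
    for identity, slots in seen.items():
        for slot, weeks in slots.items():
            if len(weeks) >= min_weeks:
                recurring[slot].add(identity)

    # A slot shared by several distinct recurring identities rules out coincidence
    return {slot: ids for slot, ids in recurring.items() if len(ids) >= min_identities}
```

No individual identity in that scan looks risky on its own; only the bird’s eye view exposes the shared schedule.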

The top-down approach is the surest and fastest way banks can ferret out synthetic identities and avoid getting deepfaked at the onboarding stage. But the clock is ticking. A study, commissioned by Deduce, found more than 75% of finservs already had synthetic identities in their databases, and 87% had extended credit to fake accounts.

Bank vs. Deepfake clearly isn’t a fair fight. But if banks do their work early, and subsequently avoid deepfakes altogether, their customers, reputations, and bottom lines will be the better for it.

Get ahead, or get left behind

New technology gets the people going. Just ask the folks coughing up a fair sum of cash for an Apple Vision Pro. Sure, these users may look like Splinter Cell operatives with their VR goggles on but, most likely, Apple’s foray into “spatial computing” will take off sooner rather than later.

However, before everyday users and even large enterprises can adopt new technologies, another category of users is way ahead of them: fraudsters. These proactive miscreants adopt the latest tech and find new ways to victimize companies and their customers. Think metaverse and crypto fraud or, most recently, the use of generative AI to create legions of humanlike bots.

Look back through the decades and a clear pattern emerges: new tech = new threat. Phishing, for example, was the offspring of instant messaging and email in the mid-1990s. Even the “advance fee” or “Nigerian Prince” scam we associate with our spam folders originally cropped up in the 1920s due to breakthroughs in physical mail.

What can we learn from studying this troubling pattern? How can businesses adopt the latest wave of nascent technologies while protecting themselves from opportunistic fraudsters? In answering these questions, it’s helpful to examine the major technological advancements of the past 20+ years—and how bad actors capitalized at every step along the way.

The 2000s

The 2000s ushered in digital identities and, by extension, digital identity fraud.

Web 1.0 and the internet had exploded by the early aughts. PCs, e-commerce, and online banking increased the personal data available on the web. As more banks moved online and digital-only banks emerged, fintech companies like PayPal hit the ground running and online transactions skyrocketed. Fraudsters pounced on the opportunity. Phishing, Trojan horse viruses, credential stuffing, and exploiting weak passwords were among the many tricks that fooled users and led to breaches at notable companies and financial institutions.

An example of a Nigerian Prince or “419” email scam

Phishing scams, in which bogus yet legitimate-looking emails persuade users to click a link and input personal info, took off in the 2000s and are even more effective today. Thanks to AI-based tools like ChatGPT, phishing emails are remarkably sophisticated, targeted, and scalable.

Social media entered the frame in the 2000s, too, which opened a Pandora’s box of online fraud schemes that still persist today. The use of fake profiles provided another avenue for phishing and social engineering that would only widen with the advent of smartphones.

The 2010s

The 2010s were all about the cloud. Companies went gaga over low-cost computing and storage solutions, only to go bonkers (or broke) due to the corresponding rise in bot threats.

By the start of the decade, Google, Microsoft, and AWS were all-in on the cloud. AWS brought serverless computing to the forefront at the 2014 re:Invent conference, and the two other big-tech powerhouses followed suit. Then came the container renaissance: the release of Docker and Kubernetes, the mass adoption of DevOps, hybrid and multicloud, and so on. But, in addition to their improved portability and faster deployment, containers afforded bad actors (and their bots) another attack surface.

AWS unveils Lambda (and serverless computing) at re:Invent 2014

The rise of containers, cloud-native services, and other cloudy tech in the 2010s led to a boom in innovation, efficiency, and affordability for enterprises—and for fraudsters. Notably, the Mirai botnet tormented global cloud services companies using unprecedented DDoS (distributed denial of service) attacks, and the 3ve botnet accrued $30 million in click-fraud over a five-year span.

Malicious bots had never been cheaper or more scalable, nor brute-force and credential stuffing attacks more seamless and profitable. The next tech breakthrough would catapult bots to another level of deception.

The 2020s

AI has blossomed in the 2020s, especially over the past year, and once again fraudsters have flipped the latest technological craze into a cash cow.

Amid the ongoing AI explosion, bad actors have specifically leveraged generative AI and self-learning identity personalization to line their pockets. It’s hard to say what’s scarier—how human these bots appear, or how easy it is for novice users to create them. The widespread availability of data and AI’s capacity to teach itself using LLMs (large language models) have spawned humanlike identities at massive scale. Less technical fraudsters can easily build and deploy these identities thanks to tools like WormGPT, otherwise known as “ChatGPT’s malicious cousin.”

SuperSynthetic identities represent the next step in bot evolution

The most nefarious offshoot of AI’s golden age may be SuperSynthetic™ identities. The most humanlike members of the synthetic fraud family tree, SuperSynthetics are all about the long con and don’t mind waiting several months to cash out. These identities, which can deepfake their way past account verification if need be, are realistically aged and geo-located with a legit credit history to boot, and they’ll patiently perform the online banking actions of a typical human to build trust and creditworthiness. Once that loan is offered, the SuperSynthetic lands its long-awaited reward. Then it’s on to the next bank.

Like Web 1.0 and cloud computing before it, AI’s superpowers have amplified the capabilities of both companies and the fraudsters who threaten their users, bottom lines and, in some cases, their very existence. This time around, however, the threat is smarter, more lifelike, and much harder to stop.

What now?

There’s undoubtedly a positive correlation between the emergence of technological trends and the growth of digital identity fraud. If a new technology hits the scene, fraudsters will exploit it before companies know what hit them.

Rather than getting ahead of the latest threats, many businesses are employing outdated mitigation strategies that woefully overlook the SuperSynthetic and stolen identities harming their pocketbooks, users, and reputations. Traditional fraud prevention tools scrutinize identities individually, prioritizing static data such as device, email, IP address, SSN, and other PII. The real solution is to analyze identities collectively and track dynamic activity data over time. This top-down strategy, with a sizable source of real-time, multicontextual identity intelligence behind it, is the best defense against digital identity fraud’s most recent evolutionary phase.

It’s not that preexisting tools in security stacks aren’t needed; it’s that these tools need help. At last count, the Deduce Identity Graph is tracking nearly 28 million synthetic identities in the US alone, including nearly 830K SuperSynthetic identities (a 10% increase from Q3 2023). If incumbent antifraud systems aren’t fortified, and companies continue to look at identities on a one-to-one basis, AI-generated bots will keep slipping through the cracks.

New threats require new thinking. Twenty years ago phishing scams topped the fraudulent food chain. In 2024 AI-generated bots rule the roost. The ultimatum for businesses remains the same: get ahead, or get left behind.

Synthetic customers are there, even if you don’t see them

There’s no denying that customer data platforms (CDPs) are a must-have tool for today’s companies. Consolidating customer data into one location is much more manageable. Aside from data privacy considerations—particularly in finance and healthcare—a CDP’s organized, streamlined view of customer data activates personalized user experiences and offers for existing customers while accurately identifying prospective customers who are most likely to drive revenue.

But synthetic fraud, which now accounts for 85% of all identity fraud, is infesting the tidiest and most closely monitored of CDPs. Most CDPs scan for telltale signs of fraud in real-time; however, synthetic fraudsters are too smart for that. The ubiquity of AI, and its ever-growing intelligence, enables bad actors to create and manipulate synthetic identities that appear more human than ever. The signs of fraud aren’t so obvious anymore, and the cybersecurity tools used by many companies aren’t up to snuff.

Effectively stomping out synthetic identity fraud requires an obsessive degree of CDP hygiene. This, of course, isn’t possible without a thorough understanding of what synthetic identities are capable of, how they operate, and the strategy companies must adopt to neutralize them.

Silent killers

No intelligence agency wants to readily admit it’s been infiltrated by a spy, and no CEO is exactly chomping at the bit to admit their company’s customer database is crawling with fake customers. When PayPal’s then-CEO, Dan Schulman, admitted to over 4 million fake accounts, the fintech company shed over 25% of its market capitalization. But these fraudsters are indeed there, camped out in CDPs and operating like legitimate customers—deposits, withdrawals, credit services, the whole nine.

A recent Wakefield report surveyed 500 senior fraud and risk professionals from the US. More than 75% of these executives said they had synthetic customers. Half of respondents deemed their company’s synthetic fraud prevention efforts somewhat effective, at best.

Perhaps most troubling? 87% of these companies admitted to extending credit to synthetic customers, and 53% of the time that credit was extended proactively, via a marketing campaign, to the fraudster. These fraudsters aren’t just incredibly human-like and patient—they’re in it for the long haul. And according to the FTC’s 2022 report on identity fraud, the per-incident financial impact is in excess of $15K.

Synthetic Sleeper identities, as we call them, can remain in CDPs for months, in some cases over a year. They deposit small amounts of money here and there while interacting with the website or mobile app like a real customer would. Once their creditworthiness gets a bump, and they qualify for a loan or line of credit, payday is imminent. The fraudster performs a “bust-out,” or “hit-and-run.” The money is spent, and the bank is left with uncollectible debt.

This is not your grandmother’s synthetic identity. Such intelligence and cunning is the handiwork of synthetic fraud’s latest iteration: the SuperSynthetic™ identity.

SuperSynthetic, super slippery

How are synthetic fraudsters turning CDPs into their own personal clubhouses? Look no further than SuperSynthetic identities. The malevolent offspring of the ongoing generative AI explosion, SuperSynthetics are growing exponentially. In Deduce’s most recent Index, 828,095 SuperSynthetic identities are being tracked in the identity graph. These are hitting companies, especially banks, with costly smash-and-grabs at an unprecedented rate.

SuperSynthetics aren’t high on style points, but why opt for a brute force approach if you don’t need to? These methodical fraudsters are more than content playing the long game. Covering all of their bases allows for such patience—their credit history is legit; their identity is realistically aged and geo-located; and, for good measure, they can deepfake their way past selfie, video, or document verification.

Even the sharpest of real-time fraud detection solutions are unlikely to catch a SuperSynthetic. The usual hallmarks—an IP address or credit card being used for multiple accounts, behavioral changes over time—aren’t present. A SuperSynthetic is far too pedestrian to raise eyebrows, depositing meager dollar amounts over several months, regularly checking its account balance, paying bills and otherwise transacting innocuously until, finally, its reputation earns a credit card or loan offer.

Once the loan is transferred, or the credit card is acquired, it’s sayonara. The identity cashes out and moves on to the next bank. After all, the fraudster does not care about their credit score for that identity, one of dozens or hundreds they are manipulating. It has done its job and will be sacrificed for a highly profitable return.

Fake identities, real problems

Deduce estimates that 3-5% of financial services and fintech new accounts onboarded within the past two years are SuperSynthetic identities. Failing to detect these sleeper identities in a CDP hurts companies in a multitude of ways, all of which tie back to the bottom line.

Per the Wakefield report, 20% of senior US fraud and risk execs say synthetic fraud costs between $50K and $100K per incident; 23% put the number at $100K+. The low end of that range sitting at a whopping $50K should be alarming enough to prompt preemptive countermeasures against CDP breaches.

Another downside of synthetic infiltration is algorithm poisoning. Since the data for synthetic “customers” is inherently fake, this skews the models that drive credit decisioning. Risky applicants can be mistakenly offered loans, or vice versa. For banks, financial losses from algorithm poisoning are two-fold: erroneously extending credit to fake or unworthy customers; and bungling opportunities to extend credit to the right customers.

A signature approach

The good news for financial services organizations (and their CDPs) is the battle against synthetic, and even SuperSynthetic, identities is not a futile one. The same strategy that’s effective in singling out synthetic identities pre-NAO (New Account Opening) can help spot synthetics that have already breached CDPs.

Even if a SuperSynthetic has already bypassed fraud detection at the account opening stage, gathering identity activity from before, during, and after the NAO workflow and analyzing identities collectively, rather than one-by-one, unearths SuperSynthetic behavioral patterns.

Traditional fraud prevention tools take an individualistic approach, doubling down on static data—such as device, email, and IP address—for singular identities. But catching synthetic fraudsters, pre- or post-NAO, calls for tracking dynamic activity data over time. At a high level (literally), this translates to a top-down, or “bird’s-eye,” strategy—powered by an enormous and scalable source of real-time, multicontextual identity intelligence—that verifies identities as a group or signature. Any other plan of attack is unlikely to pick up the synthetic scent.

A unique activity-backed data set augments the data from a CDP and fraud platform to ferret out synthetic accounts. To catch these slithery fraudsters, more data can and should be deployed. Knowing how an identity behaved online prior to becoming a customer bolsters the data science models used to give CDPs a synthetic spring cleaning.

What does this look like in practice? Say a real-time scan of in-app customer activity reveals, over an extended period, that multiple identities check their account balance every Thursday at exactly 8:17 a.m. Patterns such as this rule out coincidence and uncover the otherwise clandestine footprints of SuperSynthetic identities.
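One cheap signal such a scan can compute per identity is how rigid its activity rhythm is. Humans drift; scripts don’t. The sketch below assumes each identity’s event timestamps have already been collected; the function name and thresholds are hypothetical, not a real product API:

```python
from statistics import pstdev

def cadence_score(timestamps, min_events=4):
    """Measure how machine-like an identity's activity rhythm is.

    timestamps: sorted Unix-epoch seconds of one identity's events
    (e.g., balance checks). Returns the population standard deviation
    of the gaps between consecutive events, in seconds. A near-zero
    score sustained over weeks suggests a scripted schedule rather
    than a human. Returns None when there is too little history.
    """
    if len(timestamps) < min_events:
        return None  # not enough history to judge
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return pstdev(gaps)
```

Identities whose scores sit at or near zero can then be grouped by time slot; a cluster of them all landing on Thursday at 8:17 a.m. is exactly the kind of pattern that rules out coincidence.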

The intelligence and elusiveness of SuperSynthetics are increasing at a breakneck pace. In addition to terrorizing CDPs, SuperSynthetics have the potential to peddle sports betting accounts, carry out financial aid scams, and even swing the stock market via disinformation campaigns. Given what’s at stake, companies that don’t combat SuperSynthetics with a thorough, activity-driven approach could be in for serious trouble in the year ahead.

College students are lifelong learners. So are AI-powered fraudsters.

With each passing day AI grows more powerful and more accessible. This gives fraudsters the upper hand, at least for now, as they roll out legions of AI-powered fake humans that even governmental countermeasures—such as the Biden administration’s recent executive order—will be lucky to slow down.

Among other nefarious activities, bad actors are leveraging AI to peddle synthetic bank and online sports betting accounts, swing elections, and spread disinformation. They’re also fooling banks with another clever gimmick: posing as college freshmen.

College students, particularly underclassmen, have long been a target demographic for banks. Fraudsters are well aware, and know that banks’ yearning for customer acquisition, coupled with inadequate fraud prevention tools, presents an easy cash-grab opportunity (and, perhaps, a chance to revisit their collegiate years).

Early bank gets the bullion

The appeal of a new college student from a customer acquisition perspective can’t be overstated.

A young, impressionable kid is striking out on their own for the first time. They need a credit card to pay for both necessary and unnecessary things (mostly the latter). They need a bank. And their relationship with that bank? There’s a good chance it will outlast most of their romantic relationships.

This could be their bank through college and their working years, the bank they procure a loan from for their first house, the bank they encourage their kids and grandkids to bank with. In a college freshman, banks don’t just land one client but potentially an entire generation of clients. Lifetime value up the wazoo.

Go to any college move-in day and you’ll spot bank employees at tables, using giveaway gimmicks to attract students to open up new accounts. According to the Consumer Financial Protection Bureau, 40% of students attend a college that’s contractually linked to a specific bank. However, as banks shovel out millions so they can market their products at universities, a fleet of synthetic college freshmen lie in wait, with the potential to collectively steal millions of their own.

Playing the part

Today’s fraudsters are master identity-stealers who can dress up synthetic identities to match any persona.

In the case of a fake college freshman, building the profile starts off in familiar fashion: snagging a dormant social security number (SSN) that’s never been used or hasn’t been used in a while. Like many forms of Personally Identifiable Information (PII), stolen SSNs from infants or deceased individuals are readily available on the dark web.

From here, fraudsters can string together a combination of stolen and made-up PII to create a synthetic college freshman identity that qualifies for a student credit card. No branch visit necessary, and IDs can be deepfaked. The synthetic identity makes small purchases and pays them off on time—food, textbooks, phone bill—building trust with the bank and improving their already respectable credit score of around 700. They might sign up for an alumni organization and/or apply for a Pell Grant to further solidify their collegiate status.

Pell Grants, of course, require admission to a college—a process that, similar to acquiring a credit card from a bank, is easy pickings for synthetic fraudsters.

The ghost student epidemic

Any bank that doesn’t take the synthetic college freshman use case seriously should study the so-called “ghost student” phenomenon: fake college enrollees that rob universities of millions. 

In California alone, these synthetic students, who employ the same playbook as bank-swindling synthetics, account for 20% of community college applications (more than 460K). Thanks to the increased adoption of online enrollment and learning post-pandemic, relaxed verification protocols for household income, and the proliferation of AI-powered fake identities, ghost students can easily grab federal aid without ever attending class.

Like ghost students, synthetic college freshmen can apply for a credit card without ever stepping foot inside a bank branch. Online identity verification is a breeze for the seasoned bad actor. Given the democratization of powerful generative AI tools, ID cards and even live video interviews over Zoom or another video client can be deepfaked.

A (SuperSynthetic) tale old as time

Both the fake freshmen and ghost student problems are symptomatic of a larger issue: SuperSynthetic™ identities.

SuperSynthetic bots are the most sophisticated yet. Forget the brute force attacks of yore; SuperSynthetics are incredibly lifelike and patient. These identities play nice for several months or even years, building trust by paying off credit card transactions on time and otherwise interacting like a real human would. But, once the bank offers a loan and a big payday is in sight, that SuperSynthetic is out the door.

An unorthodox threat like a SuperSynthetic identity can’t be thwarted by traditional fraud prevention tools. Solutions reliant on individualistic, static data won’t cut it. Instead, banks (and universities, in the case of ghost students) need a solution powered by scalable and dynamic real-time data. The latter approach verifies identities as a group or signature: the only way to pick up on the digital footprints left behind by SuperSynthetics.

As humanlike as SuperSynthetic identities are, they aren’t infallible. With a bird’s-eye view of identities, patterns of activity—such as SuperSynthetics commenting on the same website at the exact same time every week over an extended period—quickly emerge.

Fake college students are one of many SuperSynthetic personas capable of tormenting banks. But it isn’t the uphill battle it appears to be. If banks change their fraud prevention philosophy and adopt a dynamic, bird’s-eye approach, they can give SuperSynthetics a schooling of their own.

Synthetic fraud remains the elephant in the room

The Biden administration’s recent executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” naturally caused quite a stir among the AI talking heads. The security community also joined the dialog and expressed varying degrees of confidence in the executive order’s ability to protect the federal government and private sector against bad actors.

Clearly, any significant effort to enforce responsible and ethical AI use is a step in the right direction, but this executive order isn’t without its shortcomings. Most notable is its inadequate plan of attack against synthetic fraudsters—specifically those created by Generative AI.

With online fraud reaching a record $3.56 billion through the first half of 2022 alone, financial institutions are an obvious target of AI-based synthetic identities. A Wakefield report commissioned by Deduce found that 76% of US banks have synthetic accounts in their database, and a whopping 86% have extended credit to synthetic “customers.”

However, the shortsightedness of the executive order also carries with it a number of social and political ramifications that stretch far beyond dollars and cents.

Missing the (water)mark

A key element of Biden’s executive order is the implementation of a watermarking system to differentiate between content created by humans and AI, a topical development in the wake of the SAG-AFTRA strike and the broader artist-versus-AI clash. Establishing the provenance of content via a digital image or signature would seem like a sensible enough way to identify AI-generated content and synthetic fraud, that is, if all of the watermarking mechanisms currently at our disposal weren’t utterly unreliable.

A University of Maryland professor, Soheil Feizi, as well as researchers at Carnegie Mellon and UC Santa Barbara, circumvented watermark verification by injecting fake watermarks into genuine imagery. They were able to remove legitimate watermarks just as easily.

It’s also worth noting that the watermarking methods laid out in the executive order were developed by big tech. This raises concerns around a walled-garden effect in which these companies are essentially regulating themselves while smaller companies follow their own set of rules. And don’t forget about the fraudsters and hackers who, of course, will gladly continue using unregulated tools to commit AI-powered synthetic fraud, as well as overseas bad actors who are outside US jurisdiction and thus harder to prosecute.

The deepfake dilemma

Another element of many synthetic fraud attacks, deepfake technology, is addressed in the executive order but a clear-cut solution isn’t proposed. Deepfaking is as complex and democratized as ever—and will only grow more so in the coming years—yet the executive order falls short of recommending a plan to continually evolve and keep pace.

Facial recognition verification is employed at the government and state level, but even novice bad actors can use AI to deepfake their way past these tools. Today, anyone can deepfake an image or video with a few taps. Apps like FakeApp can seamlessly integrate someone’s face into an existing video, or generate an entirely new one. As little as a cropped face from a social media image can spawn a speaking, blinking, head-moving entity. Uploaded selfies and live video calls pass with flying colors.

In this era of remote customer onboarding, coinciding with unprecedented access to deepfake tools, it behooves executive orders and other legislation to offer a more concrete solution to deepfakes. Finservs (financial services companies) are in the crosshairs, but so are social media platforms and their users, a scenario that poses its own litany of dangers.

Synthetic fraud: multitudes of mayhem

The executive order’s watermarking notion and insufficient response to deepfakes don’t squelch the multibillion-dollar synthetic fraud problem.

Synthetic fraudsters still have the upper hand. With Generative AI at their disposal, they can create patient and incredibly lifelike SuperSynthetic™ identities that are extremely difficult to intercept. Worse, “fraud-as-a-service” organizations peddle synthetic mule accounts from major banks, and also sell synthetic accounts on popular sports betting sites—new, aged, geo-located—for as little as $260.

More worrisome, amid the rampant spread of disinformation online, is the potential for synthetic accounts to cause social panic and political upheaval.

Many users struggle to identify AI-generated content on X (formerly Twitter), much less any other platform, and social networks that charge a nominal fee to “verify” an account offer synthetic identities a cheap way to appear even more authentic. All it takes is one post shared hundreds of thousands or millions of times for users to mobilize against a person, nation, or ideology. A single doctored image or video could spook investors, incite a riot, or swing an election.

“Election-hacking-as-a-service” is another frightening offshoot of synthetic fraud, much to the chagrin of politicians (those on the wrong side of it, at least). These fraudsters weaponize their armies of AI-generated social media profiles to sway voters. One outfit in the Middle East interfered in more than 33 elections.

Banks or betting sites, social uprisings or rigged elections: unchecked synthetic fraud, buttressed by AI, will continue to wreak havoc in multitudinous ways unless it is combated by an equally intelligent and scalable approach.

The best defense is a good offense

The executive order, albeit an encouraging sign of progress, is too vague in its plan for stopping AI-generated content, deepfakes, and the larger synthetic fraud problem. The programs and tools it says will find and fix security vulnerabilities aren’t clearly identified. What do these look like? How are they better than what’s currently available?

AI-powered threats grow smarter by the second. Verbiage like “advanced cybersecurity program” doesn’t say much; will these fraud prevention tools be continually developed so they’re in lockstep with evolving AI threats? To its credit, the executive order does mention worldwide collaboration in the form of “multilateral and multi-stakeholder engagements,” an important call-out given the global nature of synthetic fraud.

Aside from an international team effort, the overarching and perhaps most vital key to stopping synthetic fraud is an aggressive, proactive philosophy. Stopping AI-generated synthetic and SuperSynthetic identities requires a preemptive, not reactionary, approach. We shouldn’t wait for authenticated—or falsely authenticated—content and identities to show up, but rather stop synthetic fraud well before infiltration can occur. And, given the prevalence of synthetic identities, they should have a watermark all their own.

76% of finservs are victims of synthetic fraud

In 1938, Orson Welles’ infamous radio broadcast of The War of the Worlds convinced thousands of Americans to flee their homes for fear of an alien invasion. More than 80 years later, the public is no less gullible, and technology unfathomable to people living in the 1930s allows fake humans to spread false information, bamboozle banks, and otherwise raise hell with little to no effort.

These fake humans, also known as synthetic identities, are ruining society in myriad ways: tampering with electorate polls and census data, disseminating misleading social media posts with real-world consequences, sharing fake articles on Reddit that subsequently skew Large Language Models that drive platforms such as ChatGPT. And, of course, bad actors can leverage fake identities to steal millions from financial institutions.

The bottom line is this: synthetic fraud is prevalent; financial services companies (finservs), social media platforms, and many other organizations are struggling to keep pace; and the impact, both now and in the future, is frighteningly palpable.

Here is a closer look at how AI-powered synthetic fraud is infiltrating multiple facets of our lives.

Accounts for sale

If you need a new bank account, you’re in luck: obtaining one is as easy as buying a pair of jeans and, in all likelihood, just as cheap.

David Maimon, a criminologist and Georgia State University professor, recently shared a video from Mega Darknet Market, one of the many cybercrime syndicates slinging bank accounts like Girl Scout Cookies. Mega Darknet and similar “fraud-as-a-service” organizations peddle mule accounts from major bank brands (in this case Chase) that were created using synthetic identity fraud, in which scammers combine stolen Personally Identifiable Information (PII) with made-up credentials.

But these cybercrime outfits take it a step further. With Generative AI at their disposal, they can create SuperSynthetic™ identities that are incredibly patient, lifelike, and difficult to catch.

Aside from bank accounts, fraudsters are selling accounts on popular sports betting sites. The verified accounts—complete with name, DOB, address, and SSN—can be new or aged and even geo-located, with a two-year-old account costing as little as $260. Perfect for money launderers looking to wash stolen cash.

Fraudsters are selling stolen bank accounts as well as stolen accounts from sports betting sites.

Cyber gangs like Mega Darknet also offer access to the very Generative AI tools they use to create synthetic accounts. This includes deepfake technology which, besides fintech fraud, can help carry out “sextortion” schemes.

X-cruciatingly false

Anyone who’s followed the misadventures of X (formerly Twitter) over the past year, or used any social media since the late 2010s, knows that Elon’s embattled platform is a breeding ground for bots and misinformation. Generative AI only exacerbates the problem.

A recent study found that X users couldn’t distinguish AI-generated content (GPT-3) from human-generated content. Most alarming is that these same users trusted AI-generated posts more than posts from real humans.

In the US, where 20% of the population famously can’t locate the country on a world map, and elsewhere, these synthetic accounts and their large-scale misinformation campaigns pose myriad risks, especially if said accounts are “verified.” It wouldn’t take much to incite a riot, or stoke anger and subsequent violence toward a specific group of people. How about sharing a bogus picture of an exploded Pentagon that impacts the stock market? Yep. That, too.

This fake image of an explosion near the Pentagon exemplifies the danger of synthetic accounts spreading misinformation.

Election-hacking-as-a-service

Few topics are timelier, or rile up users more, than election interference, another byproduct of the fake human—and fake social media—epidemic. Indeed, spreading false information in service of a particular political candidate or party existed well before social media, but the stakes have since increased exponentially.

If fraud-as-a-service isn’t ominous-sounding enough, election-hacking-as-a-service might do the trick. Groups with access to armies of fake social media profiles are weaponizing disinformation to sway elections any which way. Team Jorge is just one example of these election meddling units. Brought to light via a recent Guardian investigation, Team Jorge’s mastermind Tal Hanan claimed he manipulated upwards of 33 elections.

The rapid creation and dissemination of fake social media profiles and content is far more harmful and widespread with Generative AI in the fold. Flipping elections is one of the worst possible outcomes, but grimmer consequences will arise if automated disinformation isn’t thwarted by an equally intelligent and scalable solution.

Finservs in the crosshairs

Cash is king. Synthetic fraudsters want the biggest haul, even if it’s a slow-burn operation stretched out over a long period of time. Naturally, that means finservs, who lost nearly $2 billion to bank transfer or payment fraud last year, are number one on their hit list. 

Most finservs today don’t have the tools to effectively combat AI-generated synthetic and SuperSynthetic fraud. First-party synthetic fraud—fraud perpetrated by existing “customers”—is rising thanks to SuperSynthetic “sleeper” identities that can imitate human behavior for months before cashing out and vanishing at the snap of a finger. SuperSynthetics can also use deepfake technology to evade detection, even if banks request a video interview during the identity verification phase.

It’s not like finservs are dilly-dallying. In a study from Wakefield, commissioned by Deduce, 100% of those surveyed had synthetic fraud prevention solutions installed along with sophisticated escalation policies. However, more than 75% of finservs already had synthetic identities in their customer databases, and 87% of those respondents had extended credit to fake accounts.

Fortunately for finservs and others trying to neutralize synthetic fraud, it’s not impossible to outsmart generative AI. With the right foundation in place—specifically a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence—and a change in philosophy, even a foe that grows smarter and more humanlike by the second can be thwarted.

This philosophical change is rooted in a top-down, bird’s-eye approach that differs from traditional, individualistic fraud prevention solutions that examine identities one by one. A macro view, on the other hand, sees identities collectively and groups them into a single signature which uncovers a trail of digital footprints. Behavioral patterns such as social media posts and account actions rule out coincidence. The SuperSynthetic smokescreen evaporates.

Whether it’s bad actors selling betting accounts, social media platforms stomping out disinformation, or finservs protecting their bottom lines, fake humans are more formidable than ever with generative AI and SuperSynthetic fraud at their disposal. Most companies seem to be aware of the stakes, but singling out bogus users and SuperSynthetics requires a retooled approach. Otherwise, revenue, users, and brand reputations will dwindle, and the ways in which fake accounts wreak havoc will multiply.

How a top-down approach can unmask AI-generated fraudsters

Whichever side of the AI debate you’re on, there’s no denying that AI is here to stay and has barely begun to tap its potential.

AI makes life easier on consumers and businesses alike. However, the proliferation of AI-based tools helps fraudsters as well.

As the AI arms race heats up, one emerging threat tormenting businesses is AI-generated identity fraud. With help from generative AI, fraudsters can easily use previously acquired Personally Identifiable Information (PII) to establish a credible, human-looking online identity, replete with an OK credit history, then leverage deepfakes to legitimize the synthetic identity with documents, voice, and video. As of April 2023, audio and video deepfakes alone had duped one-third of companies.

Without the proper fortification in place, financial services and fintech businesses are prime targets for AI-generated identities, new account opening fraud, and the resultant revenue loss.

The (multi)billion-dollar question is, how do these companies fight back when AI-generated identities are seemingly indistinguishable from real customers?

Playing the long game

There are several ways in which AI helps create synthetic identities.

For one, social engineering and phishing with AI-powered tools is as easy as “PII.” Generative AI can crank out a malicious yet convincing email, or deepfake a document or voice to obtain personal info. As for scalability, fraudsters can now manage thousands of fake identities at once thanks to AI-assisted CRMs, marketing automation software, and purpose-built fraud platforms such as FraudGPT and WormGPT. Thousands of synthetics creating “aged” and geo-located email addresses, signing up for newsletters, and making social media profiles and other accounts—all on autopilot. This unparalleled sophistication is the hallmark of an even more formidable synthetic identity: the SuperSynthetic™ identity.

Thanks to AI’s automation and effective use of previously stolen PII, SuperSynthetic identities can assemble a credible trail of online activity. These SuperSynthetics have a credible credit history, too (maybe not an 850, but a solid 700). Therein lies the other challenge with AI-generated identity fraud: the human bad actors behind the computer or phone screen, pulling the strings, are remarkably patient. They’ll invest actual money by making deposits over time into a newly opened bank account, or make small purchases on a retailer’s website to build “existing customer” status, gradually forging a bogus identity that lands them north of $15K (according to the FTC, a net ROI of thousands of dollars). AI-generated fraud is a very profitable business.

The chart above shows how a fraudster boosts credibility for an identity both online and with credit history before opening a credit card or loan, or even transacting via BNPL (Buy Now Pay Later). They sign up for cheap mobile phone plans, such as Boost, Mint, or Cricket, or make small pre-paid debit card donations to charities linked to their social security number. They can even use AI to find rental vacancies in MLS listings in a geography that maps to their aged and geo-located legend, in order to establish an online activity history of paying utility bills. The patience, calculation, and cunning of these fraudsters is striking—and just as dangerous as the AI that fuels their SuperSynthetic identities.

Looking at the big picture

Neutralizing AI-generated identity fraud requires a new approach. Traditional bot mitigation and synthetic fraud prevention solutions reliant upon static data about a single identity need some extra oomph to stonewall persuasive SuperSynthetics.

These static data-based tools lack the dynamic, real-time data and scale necessary to pick up the scent of AI-generated identity fraud. Patterns and digital forensic footprints get overlooked, and the sophistication of these fake identities even outflanks manual review processes and tools like DocV.

The bigger problem is that, when today’s anti-fraud solutions pull data from a range of sources during the verification phase, they’re doing so on an individual identity basis. Why is this problematic? Because a SuperSynthetic identity on its own will look legitimate and pass all the verification checks—including a manual review, the last bastion of fraud prevention. However, analyzing that same identity from a high-level vantage point changes everything. The identity is revealed to be a member of a larger signature of SuperSynthetic identities. Like a black light, this bird’s-eye view uncovers previously obscured, digital forensic evidence. 

But what does this evidence even look like? And what does it take to transition from an individualistic to a signature-centered approach?

The key to the evidence locker

AI-generated SuperSynthetic identities leave behind a variety of digital fingerprints or signatures. A top-down view reveals suspicious patterns across millions of fraudulent identities that are too identical to be a coincidence. 

For example, if the same three identities post a comment on the New York Times website every Tuesday morning at 7:32 a.m. PST, the chances that these are three real humans are infinitesimally small; it’s all but certain that each is SuperSynthetic.
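The grouping step can be sketched simply. The account IDs and behavioral fingerprints below are hypothetical stand-ins for the recurring (site, weekday, time) habits a real identity graph would surface; the point is that identities whose recurring behavior is exactly identical collapse into a single signature, and any signature shared by two or more identities gets flagged.

```python
from collections import defaultdict

# Hypothetical per-identity fingerprints: each entry is the set of recurring
# (site, weekday, time) behaviors observed for that identity over many weeks.
fingerprints = {
    "acct-A": frozenset({("nytimes.com/comments", "Tue", "07:32")}),
    "acct-B": frozenset({("nytimes.com/comments", "Tue", "07:32")}),
    "acct-C": frozenset({("nytimes.com/comments", "Tue", "07:32")}),
    "acct-D": frozenset({("reddit.com", "Sat", "14:05"),
                         ("espn.com", "Sun", "09:40")}),
}

def group_signatures(fingerprints, min_size=2):
    """Group identities by identical recurring behavior; any group of
    min_size or more sharing an exact schedule is one flagged signature."""
    groups = defaultdict(list)
    for ident, fp in fingerprints.items():
        groups[fp].append(ident)  # frozensets hash, so identical habits collide
    return [sorted(ids) for ids in groups.values() if len(ids) >= min_size]

print(group_signatures(fingerprints))
# [['acct-A', 'acct-B', 'acct-C']]
```

The lone, irregular account falls out of the results; the three clockwork commenters surface as one signature.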

Switching over to a top-down approach isn’t merely a philosophical change. Unlocking the requisite evidence to thwart AI-generated identities demands premium identity intelligence at scale, combined with sophisticated ML that gathers and analyzes large swaths of real-time data from diverse sources.

In short, an activity-based, real-time identity graph capable of sifting through hundreds of millions of identities.

Protect your margins (and UX)

A ginormous real-time identity graph rivaling the likes of big tech? This may seem like an unrealistic path to stopping AI-generated identities. It isn’t.

Deduce employs the largest identity graph in the US: 780 million US privacy-compliant identity profiles and 1.5 billion daily user events across 150,000+ websites and apps. Additionally, Deduce has previously seen 89% of new users at the account creation stage—where AI-generated synthetics typically pass through undetected—and 43% of these users hours before they enter the new account portal.

Deduce’s premium identity intelligence, patented technology, and formidable ML algorithms enable a multi-contextualized, top-down approach. Identities are analyzed against signatures of synthetic fraudsters—hundreds of millions of them—to ensure they’re the real McCoy. It’s a far superior alternative to overtightening existing risk models and causing unnecessary friction followed by churn, reputational harm, and revenue loss.

Want to outsmart AI-generated identity fraud while preserving a trusted user experience? Contact us today.