Some assembly required, but not much

You can be anyone you want to be.

Utter these words to your average cynic and their eyes will roll out of their sockets. But, thanks to AI, this phrase is now more truism than affirmation.

For fraudsters, AI may as well be a giant check from Publishers Clearing House. AI-generated synthetic identities net hefty payouts with minimal effort. Bad actors can seamlessly create and orchestrate synthetic identities at scale to fake out banks, execute election-hacking schemes, or pull off any other plot requiring AI-powered chicanery.

How does one go about creating a synthetic identity? It’s easier, and more lucrative, than you might think: Arkose Labs estimates that one in four new accounts is fake, and reports that these fake bots and users steal $697B annually.

We’ve outlined the steps to making a synthetic identity below. (Insert “Don’t try this at home” disclaimer here.) No, we’re not trying to add more reinforcements to the growing army of AI-generated fraudsters. Just making sure banks and other finservs grok the magnitude of this cunning and highly intelligent cyberthreat.

Let’s dig in.

Step one: breaching bad

Creating synthetic identities begins with a big bang. A sizable breach occurs, like the recent AT&T heist affecting 70M+ total customers, and oodles of PII (personally identifiable information) are stolen and subsequently sold on the dark web. (A cursory look on Telegram will surface half a dozen “data brokers” offering data from AT&T.)

Recently deceased people’s SSNs (social security numbers) and infants’ SSNs are another crowd favorite among fraudsters. After all, the first group won’t need them again and the second group likely won’t need theirs for a couple of decades. In fact, Equifax (yes, the one from the 2017 data breach involving 147M stolen identities) recently announced it had found 1.8M fake identities with SSNs in its database.

PII is the lifeblood of any synthetic identity, and the dark web is essentially a flea market where the basic building blocks of a synthetic ID—first names, last names, SSNs, and DOBs (dates of birth)—can be purchased for pennies on the dollar.

On the dark web, a synthetic fraudster buys a large batch of PII, usually tens of thousands of identities’ worth. Using stolen SSNs, they can pull FICO data without triggering an alert to the legitimate owner of the number, then leverage AI to sort the thousands of identities into credit-score bands (less than 600, 600-700, 700-800, etc.). Identities scoring below 700 are matched to activities that bolster credit, such as making charitable donations and taking out and repaying subprime or same-day loans. This essentially amounts to “pig butchering” the credit score until it clears 700. (More on this later.)
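The sorting step above is trivial to automate. Here is a minimal, purely illustrative sketch of bucketing records into credit-score bands; the field names and band labels are hypothetical, not from any real tool:

```python
# Illustrative sketch: group identity records into coarse credit-score bands.
# Field names ("credit_score") and band labels are hypothetical.
from collections import defaultdict

def bucket_by_score(identities):
    """Group records into credit-score bands like those described above."""
    bands = defaultdict(list)
    for record in identities:
        score = record["credit_score"]
        if score < 600:
            bands["under_600"].append(record)
        elif score < 700:
            bands["600_699"].append(record)
        elif score < 800:
            bands["700_799"].append(record)
        else:
            bands["800_plus"].append(record)
    return bands

# Example: three records land in three different bands.
batch = [
    {"name": "A", "credit_score": 540},
    {"name": "B", "credit_score": 645},
    {"name": "C", "credit_score": 710},
]
bands = bucket_by_score(batch)
```

The point is the scale, not the sophistication: a few lines of code can triage tens of thousands of stolen identities into “ready to exploit” and “needs fattening” piles.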

Step two: signs of life

Next up: It’s time to give this fake human a “pulse.”

The first priority is an email address, ideally one that is aged and geo-located; penny pinchers can settle for a free one. In either case, the fraudster communicates via this address to build credibility, and it must be matched to an identity in the same geography. It helps if the stolen identity boasts a credit score high enough to convince banks they’re onboarding an attractive new customer, though some fraud opportunities (such as subprime lending) don’t require a top-notch score.

Next is to nab a new phone number, which comes in handy for authentication purposes. Apply the phone number to a cheap Boost Mobile phone or the like, and that’s enough to bypass 2FA (two-factor authentication) and OTPs (one-time passcodes).

A fraudster using multiple smartphones to manage synthetic identities

Once the new synthetic account is live, it must avoid suspicion by interacting and existing online like a real human would. Filling out the rest of their profile details. Chatting with online support agents on e-commerce websites. Clicking the ads on a bank’s website that offer opportunities to apply for credit cards and loans.

Fake identities can further legitimize themselves by building out social media profiles on platforms like X or LinkedIn. Who’s to say they don’t actually work for IBM or some other Fortune 100 stalwart, or didn’t graduate from Harvard or some other Ivy League school? Do customer onboarding teams have time to poke holes? Probably not.

Step three: building credit

Once a synthetic identity has convinced the world it’s a real human, all that’s left is to build credit before cashing out and moving on to another unsuspecting bank.

This is the hallmark of the latest iteration of synthetics, known as SuperSynthetic™ identities. Rather than putting on ski masks and bull-rushing banks, SuperSynthetics prefer to take their sweet time.

Over the course of several months, a SuperSynthetic bot leverages its AI-generated identity to digitally deposit small amounts of money. In the meantime, it interacts with website and/or mobile app functions so as not to raise suspicion. SuperSynthetics might also build credit history by paying off cheap payday loans and making charitable donations that tie activity to the stolen SSN. While these modest deposits accumulate, the SuperSynthetic identity continues to consistently access its bank account (checking its balance, looking at statements, etc.).

The next generation of bots: SuperSynthetic identities

Eventually, the reputation of a “customer in good standing” is achieved. The identity’s creditworthiness score increases. A credit card or loan is extended. The fraudster starts warming up the getaway car.

Months of patience (12-18 months, on average) finally pays off when the bank deposits the loan or issues the credit card and the synthetic identity cashes out. It’s a systematic, slow-burn operation and it’s executed at scale. SuperSynthetic “sleeper” identities are actively preying on banks and finservs—by the thousands.

What now?

AI-powered synthetic and SuperSynthetic identities are wicked-smart, as are the criminal enterprises deploying them en masse. These aren’t black-hoodied EECS dropouts operating out of mom’s basement; the humans behind fake humans are well-funded and know who to target, namely smaller financial organizations like credit unions that lack the data and extensive fraud stacks and teams of a Bank of America or Chase.

Individuals aren’t safe either. Today’s fraudsters are leveraging social engineering and conversational AI tools such as ChatGPT to swindle regular Joes and Janes. Take “pig butchering” scams, for example. These drawn-out schemes start as a wrong-number text message before, weeks or months later, recipients are tricked into making bogus crypto investments.

And if synthetic or SuperSynthetic identities require an extra layer of trickery, they can always count on deepfakes. Generative AI has elevated deepfakes to hyperrealism. To wit: A finance worker in China wired $25M following a video call with a deepfaked CFO.

Creating a synthetic identity is easy. Stopping one is tough. But the latter isn’t a lost cause.

A hefty amount of real-time, multicontextual, activity-backed identity intelligence is just what the synthetic fraud doctor ordered. That’s part of the solution, at least. Banks also need to switch up the philosophical approach underpinning their security stacks: the optimal approach is a “top-down” strategy that analyzes synthetic identities collectively rather than individually.

Doing this preemptively—prior to account creation—detects signature online behaviors and patterns of synthetic identities that otherwise would get lost in the sauce. Coincidence is ruled out. Synthetics are singled out.
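To make the “top-down” idea concrete, here is a minimal sketch of collective analysis: rather than scoring each signup in isolation, look for clusters of accounts whose behavioral fingerprints are too similar to be coincidence. The feature names and threshold are hypothetical assumptions for illustration, not any vendor’s actual method:

```python
# Sketch of "top-down" detection: flag groups of signups that share an
# identical behavioral fingerprint, since unrelated legitimate users
# rarely coincide on all of these attributes at once.
# Feature names and the cluster threshold are hypothetical.
from collections import Counter

def fingerprint(signup):
    """Reduce a signup event to a tuple of behavioral attributes."""
    return (
        signup["device_model"],
        signup["browser_version"],
        signup["signup_duration_sec"],
    )

def flag_coordinated_signups(signups, min_cluster=3):
    """Return signups whose fingerprint appears min_cluster+ times."""
    counts = Counter(fingerprint(s) for s in signups)
    suspicious = {fp for fp, n in counts.items() if n >= min_cluster}
    return [s for s in signups if fingerprint(s) in suspicious]
```

Any one of those accounts, viewed alone, looks like an ordinary customer; viewed together, the repeated fingerprint is the tell. Real-world systems extend the same principle to far richer signals (typing cadence, session timing, deposit rhythms).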

If banks and finservs have any chance of neutralizing the newest evolution of synthetic fraudsters, this is the ticket. But the clock is ticking. SuperSynthetic identities grow in strength and number by the day. Businesses may not feel the damage for weeks or even months, but that long dynamite fuse culminates in a big, and possibly irreversible, boom.