How SuperSynthetic identities carry out modern-day bank robberies

The use cases for generative AI continue to proliferate. Need a vegan-friendly recipe for chocolate cookies that doesn’t require refined sugar? Done. Need to generate an image of Chuck Norris holding a piglet? You got it.

However, not all Gen AI use cases are so innocuous. Fraudsters are joining the party and developing tools like WormGPT and FraudGPT to launch cyberattacks that are more sophisticated, more dangerous, and more accessible than ever before. Consumer and enterprise companies alike are on high alert, but fintech organizations really need to upgrade their “bot-y” armor.

Each new wave of bots is stronger than the last and brings its own share of challenges, none more so than synthetic “Frankenstein” identities stitched together from real and fake PII. Now, alas, the next evolution of synthetic identities has entered the fray: SuperSynthetic™ identities.

Let’s take a closer look at how these SuperSynthetic bots came to be, how they can effortlessly defraud banks, and how banks need to change their account opening workflows.

The evolution of bots

Before we dive into SuperSynthetic bots and the danger they pose to banks, it’s helpful to cover how we got to this point.

Throughout the evolution of bots we’ve seen the good, the bad, and the downright nefarious. Well-behaved bots like web crawlers and chatbots help improve website or app performance; naughty bots crash websites, harm the customer experience and, worst of all, steal money from businesses and consumers.

The evolutionary bot chart looks like this:

Generation One: These bots are capable of basic scripting and automated maneuvers. Primarily they scrape, spam, and perform fake actions on social media apps (comments, likes, etc.).

Generation Two: Bots built around web analytics, user interface automation, and other tools that automate website development.

Generation Three: This wave of bots adopted complex machine learning algorithms, allowing for the analysis of user behavior to boost website or app performance.

Generation Four: These bots laid the groundwork for SuperSynthetics. They’re highly effective at simulating human behavior while staying off radar.

Generation Five: SuperSynthetic bots, whose sophistication negates the need for brute-force attacks with only a fractional chance of success. Individualistic finesse, combined with the bad actor’s willingness to play the long game, makes these bots undetectable by conventional bot mitigation and synthetic fraud detection strategies.

Playing the slow game

So, how have SuperSynthetics emerged as the most formidable bank robbers yet? It’s more artifice than bull rush.

Over time, a SuperSynthetic bot uses its AI-generated identity to deposit small amounts of money via Zelle, ACH, or another digital payment channel while interacting with various website functions. The bot’s meager deposits accumulate over the course of several months, and regular access to its bank account to “check its balance” earns it the reputation of a customer in good standing. Its creditworthiness score increases, and an offer of a credit card or an unsecured personal loan is extended.

At this point it’s hook, line, and sinker. The bank deposits the loan amount or issues the credit card, and the fraudster transfers it out, along with their seed funds, and moves on to the next unsuspecting bank. This is a cunning, slow-burn operation only a SuperSynthetic identity can carry out at scale. Deduce estimates that 3–5% of accounts onboarded within the past year at financial services and fintech institutions are in fact SuperSynthetic Sleeper identities.
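To make the pattern concrete, here is a minimal, hypothetical Python sketch of what a “sleeper” heuristic might look for: months of tenure, drip-fed small deposits, frequent balance checks, little genuine spending, and then a credit application. The field names and thresholds are illustrative assumptions, not any vendor’s actual rules; a real system would weigh far more signals.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration of the "sleeper" pattern described above.
# Field names and thresholds are assumptions for clarity, not any vendor's rules.

@dataclass
class Account:
    opened: date
    deposits: list[float] = field(default_factory=list)   # deposit amounts in dollars
    balance_checks: int = 0                                # low-value logins
    outbound_transfers: int = 0                            # money actually spent or moved
    applied_for_credit: bool = False

def looks_like_sleeper(acct: Account, today: date) -> bool:
    """Flag accounts that quietly build tenure with tiny deposits,
    lots of balance checks, almost no real spending, then seek credit."""
    tenure_days = (today - acct.opened).days
    small_deposits = [d for d in acct.deposits if d <= 50]
    return (
        tenure_days >= 120                      # months of patient seasoning
        and len(small_deposits) >= 6            # drip-fed seed funds
        and acct.balance_checks >= 20           # "good standing" theater
        and acct.outbound_transfers <= 1        # no organic spending behavior
        and acct.applied_for_credit             # the payoff attempt
    )

# Example: an account opened in January, seeded with $25 deposits,
# checked obsessively, then suddenly asking for an unsecured loan.
acct = Account(opened=date(2024, 1, 5),
               deposits=[25.0] * 8,
               balance_checks=40,
               outbound_transfers=0,
               applied_for_credit=True)
print(looks_like_sleeper(acct, today=date(2024, 7, 1)))  # True
```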

Such patience and craftiness are unprecedented in a bot. Stonewalling SuperSynthetics takes an equally novel approach.

A change in philosophy

Traditional synthetic fraud prevention solutions won’t detect SuperSynthetic identities. Built around static data, these tools lack the dynamic, real-time data and scale needed to sniff out an AI-generated identity. Even manual review processes and tools like DocV are no match, as deepfake AI methods can fabricate realistic documents and even live video interviews.

An individualistic approach offers little resistance to SuperSynthetic bots.

Fundamentally, these static-based tools take an individualistic approach to stopping fraud. The data pulled from a range of sources during the verification phase is analyzed one identity at a time. Viewed in isolation, a SuperSynthetic identity appears legitimate and passes every verification check. Fraudulent patterns missed. Digital forensic footprints overlooked.
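The limitation is easier to see in code. The sketch below, with invented check and field names, shows how per-identity verification evaluates each applicant in isolation: every static attribute looks plausible, so every synthetic identity passes, and nothing ever compares them to one another.

```python
# Hypothetical per-identity checks: each SuperSynthetic identity is evaluated
# on its own, so every static attribute looks plausible and it sails through.
# Function and field names are illustrative assumptions.

def passes_static_checks(identity: dict) -> bool:
    return (
        identity["ssn_valid"]            # name/SSN/DOB combination resolves
        and identity["address_on_file"]  # address matches bureau records
        and identity["device_ok"]        # device/IP not on a blocklist
        and identity["doc_verified"]     # ID document (or deepfake) accepted
    )

identities = [
    {"ssn_valid": True, "address_on_file": True, "device_ok": True, "doc_verified": True},
    {"ssn_valid": True, "address_on_file": True, "device_ok": True, "doc_verified": True},
]

# Each identity passes on its own; nothing in this loop links them together.
print(all(passes_static_checks(i) for i in identities))  # True
```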

A philosophical change in fraud prevention is foundational to banks keeping SuperSynthetic bots out of their pockets. Verifying identities as a collective group, or signature, is the only viable option.

A view from the top

Things always look different from the top floor. In the case of spotting and neutralizing SuperSynthetic identities, a big-picture perspective reveals digital footprints otherwise obscured by an individualistic anti-fraud tool.

A bird’s-eye view that groups identities into a single signature uncovers suspicious evidence such as simultaneous social media posts, concurrent account actions, matching time-of-day and day-of-week activities, and other telltale signs of fraud. Considering the millions of fraudulent identities in the mix, it’s illogical to attribute this evidence to mere happenstance.
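As an illustration of the signature idea, the hypothetical sketch below groups accounts by a shared behavioral fingerprint (the hour-of-day and day-of-week of their actions) and flags clusters too large to be coincidence. The event format and cluster threshold are assumptions for clarity, not a description of any specific product.

```python
from collections import defaultdict

# Hypothetical signature view: instead of judging accounts one at a time,
# group them by a shared timing fingerprint and flag improbably large clusters.

def timing_fingerprint(events: list[tuple[int, int]]) -> tuple:
    """events = [(weekday, hour), ...] for one account's actions."""
    return tuple(sorted(set(events)))

def find_signatures(accounts: dict[str, list[tuple[int, int]]],
                    min_cluster: int = 25) -> list[list[str]]:
    clusters: dict[tuple, list[str]] = defaultdict(list)
    for account_id, events in accounts.items():
        clusters[timing_fingerprint(events)].append(account_id)
    # Legitimate customers rarely share an identical activity pattern;
    # a large cluster is the telltale sign of coordinated synthetics.
    return [ids for ids in clusters.values() if len(ids) >= min_cluster]

# Usage: 30 accounts that all act on Mondays at 09:00 and Thursdays at 14:00
# collapse into one suspicious signature.
bot_pattern = [(0, 9), (3, 14)]
accounts = {f"acct_{n}": bot_pattern for n in range(30)}
print(find_signatures(accounts))  # one cluster of 30 account IDs
```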

There’s no denying that SuperSynthetic identities have arrived. No prior iteration of bot has ever appeared so lifelike or operated with such precision. If banks want to protect their margins and user experience, verifying identity via a signature approach is a must. That does require augmenting existing fraud prevention stacks with ample (and scalable) real-time identity intelligence, but the first step in thwarting SuperSynthetics is an ideological one: adopt the signature strategy.