The war between democracy and generative AI rages on

2024 is a big year for democracy. Half of the global population resides in countries that have an election this year, and all eyes, of course, will be on the main event: the grudge match between Joe Biden and Donald Trump.

But running parallel to this frenzied election year is the rapid evolution of artificial intelligence and its ever-growing assortment of applications. Just when we think it can’t get any smarter or more believable, AI leapfrogs our expectations. It’s exciting for users looking to post amusing memes or videos in their Slack channels—not so much for politicians. Voters are frustrated, too. Is that audio clip of a candidate’s off-color remarks genuine, or deepfaked malarkey?

The deceptive tactics of generative AI-powered deepfakes and fake humans—and their potential to swing elections—aren’t just fodder for cybersecurity trades. Biden issued an executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” late last year. Several tech giants, including Google, Microsoft, and OpenAI, recently met at the Munich Security Conference and agreed to take “reasonable precautions” to stop AI from influencing elections.

Executives from leading tech companies gather at the 2024 Munich Security Conference.

But Biden’s executive order doesn’t sufficiently address synthetic fraud; the Munich pact, according to critics, isn’t proactive enough; and fraudsters (especially those with AI at their disposal) stay a step ahead of whatever countermeasures platforms or the government put in place. Furthermore, the tech companies tasked with hosting and moderating deepfaked content have laid off more than 40K workers. Without a new approach to neutralizing synthetic fraudsters, the fakery will continue to snowball.

Here are the ways in which generative AI is defrauding elections globally, and how a re-tooled approach may help social media and AI platforms fight back.

1-800-ROBO-CALL

Video deepfakes steal most of the headlines, but AI-generated audio is more advanced and democratized (at least until hyper-realistic video offerings like OpenAI’s Sora become widely available). One could even argue that deepfaked audio is more effective at altering elections, especially after a robocall impersonating Biden tried to dissuade people from voting in the New Hampshire primary.

Context, or lack thereof, is what makes audio deepfakes tough to recognize. The voters on the other end of the line lack the visual indicators that give video deepfakes away.

This context deficit bolsters the believability of so-called “grandparent” scams as well, in which a fraudster clones the voice of someone who’s close to the victim and convinces them to wire money. Personalization brings credibility. Just as Cameo users can have celebrities record birthday wishes for a loved one, AI trained on a person’s voice or video can now make a celebrity or politician “record” a custom message.

If you’re in the business of artificially swaying voter sentiment and rigging elections, simply copy the voice of a relative or friend, spew some disinformation about Candidate XYZ or Prop ZYX, and move on to the next robocall.

In February, the FCC banned robocalls that use AI-generated voices. Time will tell if this puts audio deepfakers on hold. (Don’t count on it.)

A picture’s worth a thousand votes

AI image generators are also under the microscope. The Center for Countering Digital Hate, a watchdog group, found that tools like Midjourney and ChatGPT Plus can create deceptive images capable of spreading false political information.

The study, which also tested DreamStudio and Microsoft’s Image Creator, produced fake election imagery in more than 40% of test cases. Midjourney fared significantly worse, generating disinformation 65% of the time. That’s not a huge surprise considering the company didn’t sign the Munich Security Conference pact and employs only 11 team members.

The realistic nature of these images is startling. In March, an AI-generated photo purporting to show Black Trump supporters posing with the former president was exposed as fake, apparently created in an attempt to draw Black voters away from the Biden campaign. Several equally bogus AI-generated images of Trump being arrested have also proliferated across social media.

AI-generated political images are incredibly lifelike.

Since the watchdog report, leading AI generators have put guardrails in place. The most obvious move is to disallow prompts involving “Biden” or “Trump.” However, jailbreaking maneuvers can sometimes bypass such controls. For example, instead of typing a candidate’s name, bad actors can key in their defining physical characteristics along with, say, “45th president,” and produce the desired image.
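
To see why name-based filtering is such a flimsy guardrail, consider the minimal sketch below (hypothetical Python, not any platform’s actual moderation code, with an invented `BLOCKED_TERMS` list): a naive blocklist catches the obvious prompt but waves through a descriptive paraphrase aimed at the same person.

```python
# Hypothetical illustration of a naive prompt blocklist; this is NOT
# any image generator's real moderation pipeline.

BLOCKED_TERMS = {"biden", "trump"}  # assumed candidate-name blocklist

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that mention a blocked name as a whole word."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

# The obvious prompt is caught...
print(is_prompt_allowed("photo of Trump being arrested"))  # False

# ...but a descriptive paraphrase sails through, even though it
# steers the model toward the same person.
print(is_prompt_allowed(
    "photo of the 45th president, a tall man with blond hair "
    "and a red tie, being arrested"
))  # True
```

Real platforms layer image classifiers and human review on top of prompt filters, but the cat-and-mouse dynamic is the same: each new rule invites a new paraphrase.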

Take political candidates out of the equation. There are still other visuals that can sway voters. How about a fake image of a Trump supporter smashing a ballot box open, or Biden supporters lighting Mar-a-Lago ablaze? Election tampering campaigns don’t always target a specific candidate or political party but rather a divisive issue such as freedom of choice or border control. For instance, images of migrants illegally crossing the Rio Grande or climbing a fence, fake or not, are bound to rile up one group of voters.

A global crisis

International examples of AI-based election interference could portend trouble for the US, but hopefully will inspire technologists and government officials to rethink their cybersecurity approach.

In Slovakia, a key election was tainted by AI-generated audio that mimicked a candidate’s voice saying he had tampered with the election and, worse (for some voters), planned to raise beer prices. Indonesian Gen-Z voters warmed up to a presidential candidate and previously disgraced military general thanks to a cat-loving, “chubby-cheeked” AI-generated image of him. Bad actors in India, meanwhile, are using AI to “resurrect” dead political figures who in turn express their support for those currently in office.

An AI-generated avatar of M. Karunanidhi, the deceased leader of India’s DMK party.

The image of the Indonesian presidential candidate is nothing more than a harmless campaign tactic, but are the other two examples the work of election-hacking-as-a-service schemes? Troubling as the term may be, this is our new democratic reality: hackers contracted to unleash hordes of synthetic identities across social media, spreading false, AI-generated content to influence voter sentiment however they please.

An Israeli election-hacking group dubbed “Team Jorge,” which controls over 30K fake social media profiles, claims to have meddled in a whopping 33 elections, according to a Guardian report. If similar groups aren’t already threatening elections in the US, they soon will be.

The road ahead

Combating AI-powered election fraud is an uphill battle, and Midjourney CEO David Holz believes the worst is yet to come. “Anybody who’s scared about fake images in 2024 is going to have a hard 2028,” Holz warned during a recent video presentation. “It will be a very different world at that point…Obviously you’re still going to have humans running for president in 2028, but they won’t be purely human anymore.”

What is the answer to this problem, this future Holz sees in which every political candidate has a lifelike “deepfake chatbot” armed with manufactured talking points? Raising public awareness of generative AI’s role in election tampering is important but, ironically, it can also backfire. As more people learn how complex and prevalent deepfaked audio, video, and images have become, a growing sense of skepticism can cloud their judgment. Known in political circles as the “liar’s dividend,” this effect leads jaded, deepfake-conscious voters to mislabel genuine media as fake. It doesn’t help matters when presidential candidates dismiss mainstream media as fake while publicizing their own version of events.

Social media and generative AI platforms have their work cut out for them. Curbing, much less neutralizing, AI-powered election fraud pits them against artificial intelligence and synthetic identities that are disturbingly lifelike and nearly undetectable. This includes SuperSynthetic™ “sleeper” identities that can hack elections just as easily as they swindle finservs.

Deepfaked synthetic identities are too smart and real-looking to face head-on. Stopping these slithery fraudsters requires an equally crafty strategy and a sizable chunk of real-time, multicontextual, activity-backed identity intelligence. Our money is on a “top-down” approach that, prior to account creation, analyzes synthetic identities collectively rather than individually. This bird’s-eye view picks up on the signature online behaviors of synthetic identities, patterns that rule out coincidence.
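
As a rough illustration of what analyzing identities “collectively” could look like (a hypothetical Python sketch with invented field names and thresholds, not Deduce’s actual method), imagine bucketing incoming signup attempts by a shared behavioral fingerprint and flagging any cohort too large and too uniform to be coincidence:

```python
# Hypothetical sketch of "top-down" cohort analysis; the fields and
# threshold below are invented for illustration only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class SignupAttempt:
    email_pattern: str   # e.g. "word+4digits@freemail"
    user_agent: str      # browser/device string
    active_hour: int     # local hour the identity is typically active

def fingerprint(attempt: SignupAttempt) -> tuple:
    """Collapse an attempt to the behavioral traits we group on."""
    return (attempt.email_pattern, attempt.user_agent, attempt.active_hour)

def flag_suspicious_cohorts(attempts, min_cluster_size=50):
    """Flag fingerprints shared by implausibly many 'independent' signups.

    One account with this exact combination is unremarkable; hundreds
    arriving together form a pattern that rules out coincidence.
    """
    cohorts = defaultdict(list)
    for attempt in attempts:
        cohorts[fingerprint(attempt)].append(attempt)
    return {fp: members for fp, members in cohorts.items()
            if len(members) >= min_cluster_size}
```

The point of the sketch is the unit of analysis: no single signup in a flagged cohort looks suspicious on its own, which is exactly why screening identities one at a time misses them.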

The Deduce Identity Graph is monitoring upwards of 30 million synthetic identities in the US alone. Some of these identities will attempt to “hack the vote” come November. Some already are. A high-level approach that examines them as a group—before they can deepfake unsuspecting voters—may be democracy’s best shot.