Deepfakes and Biometrics: Risks, Examples, and 5 Tips to Spot Fakes
January 15, 2025 | 5-minute read
Deepfakes use AI to manipulate media, including audio and video, to create hyper-realistic but entirely fake content of real people. While they can be entertaining or even harmless in some cases, they can also pose real threats, including political manipulation, fraud, and invasion of privacy.
The advancement of artificial intelligence and deepfake technology has made it easier than ever to create convincing false content. And with misinformation and disinformation on the rise in our politically polarized modern society, the consequences can be dangerous. Whether it’s a celebrity endorsing a cause they don’t support or someone tricking a financial institution with a fake voice, deepfakes are becoming a serious issue in today’s digital world.
Political Deepfakes
In July 2024, Elon Musk, CEO of Tesla and owner of X (formerly Twitter), shared to his 199.2 million X followers a deepfake video of vice president and presidential candidate Kamala Harris cursing and making out-of-character statements. One month later, former president and president-elect Donald Trump shared to his 7.78 million followers on his Truth Social platform a deepfake image of popular singer Taylor Swift endorsing his 2024 presidential campaign, along with several images of digitally rendered young women in "Swifties for Trump" t-shirts. These deepfakes quickly went viral.
Aside from being dishonest, deepfakes depicting public figures can polarize public opinion, create chaos, and even influence elections, all at remarkable speed: a deepfake can go from a few retweets to global confusion in a matter of hours.
While some view this type of content as harmless parody, when it is shared by public and political figures with a large reach, and depicts figures with an outsize influence on our elections (when Taylor Swift did eventually endorse a candidate, 35,000 new voters registered within 24 hours), the potential to confuse voters is quite high.
No matter where you fall on the political spectrum, deepfakes are an unfortunate obstacle in the pursuit of truth.
Voice Deepfakes for Fraud
It’s not just video content that’s used in AI manipulation. Voice deepfakes have emerged as a new and troubling trend in scams. In one high-profile case, a UK-based energy firm was scammed out of $243,000 when criminals ran a “vishing” campaign (“vishing,” short for “voice phishing,” is the tactic of tricking targets over the phone) using an artificial voice so similar to that of the chief executive of the firm’s German parent company that employees didn’t notice a difference. According to The Next Web, this incident marked the first time AI-based voice fraud had netted such a large sum.
By mimicking a familiar voice, scammers were able to execute a financial heist with shocking precision. This kind of attack highlights the growing threat to both financial and personal security. When you can’t trust the voices you hear, verifying identities becomes a major challenge for businesses and individuals alike.
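One common defense is an out-of-band check: a request made by voice alone is never enough to move money. The sketch below is a hypothetical policy (the threshold, field names, and workflow are assumptions for illustration, not anything from the incident above) showing how a large voice-initiated payment could be held until someone confirms it on a separately registered channel.

```python
# Minimal sketch of an out-of-band verification policy (hypothetical).
# A voice request alone never releases a large payment; someone must
# confirm it on a second channel, e.g. a callback to a number on file.
from dataclasses import dataclass

CALLBACK_THRESHOLD = 10_000  # assumed policy limit, in dollars


@dataclass
class PaymentRequest:
    requester: str
    amount: float
    confirmed_out_of_band: bool = False  # set after a callback succeeds


def approve(request: PaymentRequest) -> bool:
    """Release small payments; large ones need second-channel confirmation."""
    if request.amount < CALLBACK_THRESHOLD:
        return True
    return request.confirmed_out_of_band


# A $243,000 request made "by the CEO" over the phone is held until
# the real executive is reached on a number already on file.
urgent = PaymentRequest("caller claiming to be the CEO", 243_000)
print(approve(urgent))  # False: held pending callback
urgent.confirmed_out_of_band = True
print(approve(urgent))  # True: released after verification
```

The point of the design is that the attacker's strongest asset, a convincing voice, is simply not an input the approval function consults for high-value requests.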
The Crisis of Deepfake Pornography
The dark side of deepfake technology also invades people’s most personal spaces. Deepfake pornography, in which a person’s face is superimposed onto explicit content without their consent, has become a growing issue that disproportionately affects women. Politician Alexandria Ocasio-Cortez has been a target of deepfake pornography, and one recent study found that 98 percent of deepfake videos online were pornographic and that 99 percent of those targeted were women or girls.
This misuse of AI is a serious violation of privacy and personal safety. It not only tarnishes reputations but also causes psychological harm. Victims are often left with little legal recourse (many states have yet to criminalize deepfake revenge porn), and the damage done to their personal and professional lives can be irreversible.
Deepfake Ads and False Endorsements
Deepfakes are also now creeping into the world of advertising. These AI-generated ads can mislead consumers into thinking their favorite celebrity is behind a product or service when, in fact, they aren’t involved at all.
These deepfake ads present major legal and ethical concerns. They can tarnish a celebrity’s brand and lead to lawsuits over false advertising. For consumers, it’s yet another layer of deception to navigate in an already complex digital advertising world.
5 Practical Tips for Spotting Deepfake Content
So how can you protect yourself from falling for a deepfake? Here are a few practical tips:
- Look for unnatural eye movements – Deepfake technology struggles to replicate the natural blink rate and subtle eye movements of real humans.
- Check for inconsistencies in lighting – If the lighting on the person doesn’t match the background, it might be a deepfake.
- Watch for odd facial expressions – Deepfakes often have facial movements that seem slightly off, like a smile that doesn’t quite match the tone of the conversation.
- Listen closely – If it’s a voice deepfake, the tone may be flat, or the speech pattern might sound unnatural.
- Cross-reference sources – If a piece of content looks suspicious, check it against trusted news sources to confirm its authenticity.
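The first tip, unnatural eye movement, is the basis of a well-known detection heuristic: the eye aspect ratio (EAR), computed from six landmarks around each eye, drops toward zero during a blink, so footage whose EAR never dips may show a subject who never blinks. The sketch below uses hypothetical hand-picked landmark coordinates purely for illustration; in practice the landmarks would come from a facial-landmark detector such as dlib or MediaPipe.

```python
# Illustrative sketch of the eye aspect ratio (EAR) blink heuristic.
# Landmark coordinates here are hypothetical; real ones come from a
# facial-landmark detector run on each video frame.
import math


def euclidean(p, q):
    """Distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def eye_aspect_ratio(eye):
    """EAR for six landmarks p1..p6 ordered around the eye.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). It falls toward zero
    as the eye closes, so a long video segment with no dips can flag
    a subject who never blinks naturally.
    """
    vertical = euclidean(eye[1], eye[5]) + euclidean(eye[2], eye[4])
    horizontal = euclidean(eye[0], eye[3])
    return vertical / (2.0 * horizontal)


# Hypothetical landmarks for an open eye and a nearly closed one.
open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.5), (4, 0.5), (6, 0), (4, -0.5), (2, -0.5)]

print(eye_aspect_ratio(open_eye))    # well above a blink threshold
print(eye_aspect_ratio(closed_eye))  # near zero: eye closed
```

A detector would track this ratio frame by frame and count dips below a threshold as blinks; an implausibly low blink count over tens of seconds is one signal, not proof, that the footage is synthetic.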
A Future with Deepfakes
Deepfakes are no longer just the stuff of science fiction – they are here, and they’re causing real harm. From political manipulation and financial fraud to privacy violations and misleading ads, the potential for misuse is vast. As we’ve seen from these examples, the key lesson is to remain vigilant and critical of the content we consume: always investigate sources and double-check with other news outlets if an image or audio clip seems suspect.
The future of deepfakes will continue to challenge our legal systems, ethical standards, election processes and even the way we perceive reality. As the technology behind deepfakes evolves, so must our strategies for detecting and preventing their misuse.
Unmasking Deepfakes: Protecting Your Business in the Age of AI Fraud
Wed, January 29, 2025 | 2:00 PM EST
AI-generated fraud—through deepfakes, GenAI videos, and synthetic images—is rapidly evolving, posing a growing risk to organizations.
Join Enrique Caballero, a digital identity expert, and Angela Diaco, Senior Marketing Director at Aware, as they break down the latest fraud trends and reveal tools and strategies to protect your business.
During the session, we’ll explore:
- The industries most impacted by AI fraud today and what’s coming next.
- How deepfakes and GenAI videos undermine digital identity verification.
- The biometric technologies designed to combat these emerging threats.
- Insights into NIST FATE rankings and how Aware leads the biometric security space.