AI deepfakes have become incredibly common, and a lot more convincing. Now that we have multimodal generative AI apps, capable of creating hyper-realistic images, videos, and even human-sounding voices, it’s much harder to trust anything we see online.
In fact, around 60% of consumers say they’ve encountered at least one deepfake video in the last year. However, the number of people interacting with deepfakes may be much higher. After all, one study found that human beings could only identify high-quality deepfakes 24.5% of the time.
AI deepfakes aren’t necessarily “all bad”. They can serve positive purposes – just look at Zoom’s AI avatars, which can help employees save time on meetings.
The trouble is that the risks of deepfakes are rapidly outweighing the benefits, creating significant ethical, security, and privacy risks for everyone.
Here’s everything you need to know about AI deepfakes.
What are AI Deepfakes?
AI deepfakes (usually just called deepfakes) are realistic-looking images, videos, or audio clips created with artificial intelligence. They’re designed to “replicate” real people.
You’ve probably encountered some type of “deepfake” already. There are plenty of fabricated videos out there of politicians, influencers, and celebrities saying or doing things they never actually said or did.
Sometimes, they’re intended for entertainment purposes. For instance, Eminem created a deepfake to portray his alter ego, “Slim Shady,” at an event. More commonly though, they’re used to deceive, defame, and spread disinformation.
Notably, deepfakes aren’t entirely new. The term was actually coined back in 2017, when people started using face-swapping and photo editing technology to create convincing images and videos of their favorite celebrities doing certain things.
However, AI deepfakes are becoming more common now that we have generative AI tools, like OpenAI’s ChatGPT and DALL-E 3. These tools give anyone (even people without much technical knowledge) the means to create truly convincing deepfakes at scale.
Understanding AI Deepfake Technology
Several years ago, creating deepfakes meant working with a range of different software solutions, such as photo editing tools, lip-syncing software, and audio systems. Now, the development of deepfakes is becoming easier, thanks to the rise of advanced AI tools with enhanced deep learning capabilities and content generation features.
“GAN” (Generative Adversarial Network) technology is at the heart of the deepfake explosion. This form of deep learning relies on a “generator” that creates content, and a “discriminator” that attempts to determine whether the content is fake. Over time, GANs learn from their mistakes and create more realistic, authentic-looking content.
On top of GANs, deepfakes can also leverage:
- Convolutional neural networks: CNNs analyze patterns in visual data, making them effective for facial recognition and movement tracking.
- Autoencoders: This form of neural network technology identifies relevant attributes of a target, such as body movements or facial expressions, and imposes them onto a video.
- Natural language processing technology: NLP solutions are used to create deepfake audio, analyzing the attributes of a person’s speech, then generating similar audio.
- High-performance computing: High-level computing systems provide the necessary power required to create, train, and fine-tune deepfakes.
- Video editing software: AI-powered video editing software helps deepfake creators refine outputs and improve realism.
There are already tools across the web that can use all of these technologies in tandem to create deepfakes in seconds. For instance, you might have encountered options like FaceApp, FaceMagic, Deep Art Effects, and DeepSwap.
How Do AI Deepfakes Work?
Deepfakes combine data and artificial intelligence to mimic another person’s voice, image, or likeness. Though there are various ways to create deepfakes, most follow the same general process, which starts with data collection.
AI tools need “input”, or data about the person they’re going to mimic to create a realistic deepfake. The more visual and audio data you can provide, the more convincing the deepfake will be. Highly advanced deepfakes are usually created using hundreds of images, videos, and audio clips, which train the model to replicate the nuances of another person.
Following data collection, machine learning models (typically GANs and autoencoders) get to work on generating an output. The autoencoder compresses the data you share with the system into a compact representation, then decompresses it back to its original form. This helps with tasks like swapping faces, by reducing high-dimensional data (images) into a low-dimensional code that can be modified and reformed into a new image.
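The compress-then-decompress idea can be sketched with a tiny linear autoencoder built from SVD (a PCA-style toy, not real deepfake tooling; the data, dimensions, and function names here are invented for illustration):

```python
import numpy as np

# Toy linear autoencoder built from SVD (PCA-style): "encode" squeezes each
# sample into k numbers, "decode" rebuilds an approximation from that code.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # stand-in for flattened face images
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(X, k):
    """Compress: project centered samples onto the top-k components."""
    return (X - mean) @ Vt[:k].T

def decode(Z, k):
    """Decompress: map the low-dimensional code back to the input space."""
    return Z @ Vt[:k] + mean

# A bigger code preserves more detail: reconstruction error shrinks with k.
err_2 = np.mean((X - decode(encode(X, 2), 2)) ** 2)
err_8 = np.mean((X - decode(encode(X, 8), 8)) ** 2)
```

Classic face-swap pipelines exploit the same idea at a much larger scale: two decoders share one encoder, so the compact code extracted from one face can be decoded into another.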
The GAN system uses its “generator” to create images, while the discriminator evaluates them. Over time, the generator learns to make more accurate forgeries, and the discriminator becomes better at spotting fakes. This enhances the quality and realism of the generated outputs.
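That adversarial loop can be illustrated with a toy one-dimensional “GAN” in plain numpy. Nothing here resembles a production model; every number (learning rate, data distribution, step count) is an arbitrary choice for the sketch:

```python
import numpy as np

# Toy 1-D "GAN": the generator shifts noise by learned (a, b); the
# discriminator is a logistic score D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters: fake = a*z + b
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.01
for _ in range(2000):
    real = rng.normal(4.0, 1.0, size=64)   # "real" data centered at 4
    z = rng.normal(size=64)                # generator input noise
    fake = a * z + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascends log D(fake): nudge fakes toward what D calls real.
    g = (1 - sigmoid(w * fake + c)) * w
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)
```

By the end of the loop, the generator’s offset `b` has drifted from 0 toward the real data’s mean: the same “learn from its mistakes” dynamic described above, just in one dimension instead of millions of pixels.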
Most models undergo extensive training, requiring significant computational power and fine-tuning. Once the model is adequately trained, it uses “generative” tools to create deepfake videos, images, or audio, which can then be refined with editing tools.
Can AI Deepfakes Be Used for Good?
As mentioned above, AI deepfakes are generally more “dangerous” than they are beneficial. All forms of deepfakes pose ethical issues, but there are instances where these resources can be used for good. For instance, a David Beckham deepfake was once used for a “Malaria Must Die” advertising campaign, which helped to raise awareness of the disease.
Additionally, deepfakes can be used to create content for helpful purposes. For instance, Reuters uses deepfake videos of reporters to share news. Even companies like Zoom are empowering employees to create “deepfake” avatars that can record video clips or participate in meetings on their behalf.
Deepfakes are even popular for creating new art and enhancing existing pieces. They’ve been used regularly in the film and music industries, and even helped to produce the “Dalí Lives” exhibition at the Dalí Museum in St. Petersburg, Florida. The reality is that there are a lot of “uses” for deepfakes, such as:
Enhancing Entertainment and Media
In the entertainment space, deepfakes are more common than you might think. Filmmakers and producers use deepfake technology to resurrect deceased actors, de-age or age actors in movies, and apply various visual effects. This allows for a lot of creative flexibility.
Even everyday people have used deepfake technology to “edit” their favorite films, like embedding Jerry Seinfeld into one of the most famous scenes from Pulp Fiction.
AI Deepfakes for Content Personalization
Deepfakes have also been used to personalize certain types of content for marketing and advertising purposes. For instance, a company could create a commercial featuring a deepfake version of a celebrity speaking various languages, making the content more relatable to diverse audiences.
Additionally, companies can create personalized video messages for customers using deepfakes, potentially increasing customer loyalty and engagement.
Education and Training
Another potentially beneficial use case for deepfakes comes from the educational sector. With deepfakes, educators can create interactive learning experiences that feel more immersive and engaging. They could bring historical figures to life to tell students about their experiences, or create videos of scientists explaining a concept in simple language.
This could potentially make learning more accessible and engaging, particularly for people who prefer to learn through visual content.
Art and Social Commentary
Both artists and activists have previously used AI deepfakes to create thought-provoking pieces and videos connected to social issues. By altering and reimagining events, deepfakes can help to challenge the perceptions of viewers and encourage deeper conversations about ethics.
Some creators have even designed videos that question the dangers of deepfakes themselves, such as the famous YouTube video “This is not Morgan Freeman”.
Research and Development
Researchers are employing deepfakes in various fields, from psychology to facial recognition. By generating various scenarios, facial expressions, and movements, professionals can more effectively study human behavior and emotional responses.
Plus, they can create more advanced facial recognition systems, which could potentially lead to the development of better authentication and security tools.
Why Are AI Deepfakes Dangerous? The Risks
Ultimately, while there are positive aspects to deepfakes, there are also many downsides. In the last couple of years alone, there have been countless media reports about deepfakes that have defamed or deceived people around the world.
For instance, in 2019, Mark Zuckerberg was the victim of a deepfake video that showed him telling users that he “owned” the public. In 2020, deepfakes started to emerge in the political landscape, with countless images and videos of US President Joe Biden showing him in increasing stages of cognitive decline. Donald Trump and Barack Obama have also been victims of deepfake videos used to spread disinformation and cause mistrust.
On a broad scale, deepfakes pose the following risks:
The Spread of Misinformation
Deepfakes create skepticism and mistrust. They make it difficult to know whether we can believe what we see and hear online. This has led to significant issues with how people see and consume media – to the point where many no longer trust the news reports and alerts they see.
The proliferation of deepfakes is rapidly eroding public trust in legitimate news sources, as only around 22% of people feel confident that they can detect a deepfake straight away. When nobody knows what’s “real” and what’s “fake,” confusion and chaos abound.
Privacy Violations
The evolution of different types of AI capable of creating more sophisticated deepfakes has introduced a new slew of privacy issues into the modern world. As deepfakes become more accessible and easy to create, anyone can potentially steal a person’s image, voice, or likeness.
This means anyone could wake up one day and see a video circulating online that shows them doing or saying something they would never do or say. This is clearly a significant violation of privacy and human rights. Even videos that seem “fun,” like clips of Tom Cruise in various scenarios on TikTok, can harm a person’s reputation.
Security Issues
Criminals can use deepfaked video and audio to bypass biometric and liveness checks, allowing them to steal sensitive information or commit fraud. For instance, a criminal could use voice samples to open a bank account, or take over an existing bank account and withdraw money. They could even use deepfakes to steal credentials and credit card information as part of comprehensive phishing scams.
AI deepfakes are increasingly being used in various forms of financial deception and fraud. Some criminals have even attempted to replicate the voices and images of a person’s loved ones to dupe them into sending money to hidden accounts.
Harassment and Blackmail
Harassment and blackmail are two of the most common examples of how AI deepfakes can be used to harm other people. For years now, criminals have used deepfake technology to add a person’s face to pornographic images and videos, threatening to damage the victim’s reputation unless they pay for the criminal’s silence.
Some malicious individuals have even taken these steps as an act of “revenge,” to punish an ex-partner and damage their reputation.
Political Upset
In the political world, where credibility and trust are crucial, AI deepfakes have emerged as a tool for causing significant upheaval. Hyper-realistic audiovisual content can stir controversy, create false perceptions, and destabilize electoral processes.
The implications of these deepfakes are enormous, affecting not just individual politicians but entire political landscapes across countries.
Are AI Deepfakes Legal?
So, are AI deepfakes legal if they’re so dangerous? The unfortunate answer is yes – at least for the most part. Deepfakes are generally only illegal when they violate existing laws, such as those covering defamation, fraud, or harassment.
However, throughout the world, government groups are taking steps to fight back against damaging deepfakes. For instance, around 40 states in the US have legislation pending aimed at reducing the use of deepfakes, and five states have banned deepfakes used to influence elections.
We don’t have many official laws linked to AI deepfakes yet because people still don’t fully understand the dangers of the technology. Fortunately, there are some legislative efforts taking place that could encourage positive action against deepfakes, such as:
- The DEFIANCE Act: If passed, the DEFIANCE Act would be the first federal law to protect victims of deepfakes. It would allow victims to sue deepfake creators if they produce content they didn’t “consent” to being a part of.
- Preventing Deepfakes of Intimate Images Act: Congressman Joe Morelle introduced this act in May 2023. It would criminalize the non-consensual sharing of deepfakes related to intimate images.
- Take it Down Act: This act, sponsored by Senator Ted Cruz, would criminalize the publishing of “revenge porn”. It would also require social media platforms to develop a strategy for removing images and videos within 48 hours of a request.
- The Deepfakes Accountability Act: Introduced in 2023 by Glenn Ivey and Yvette Clarke, this act would require creators to add digital watermarks to deepfake content.
How To Protect Yourself from AI Deepfakes
Ultimately, since laws and legislation are still being developed, it’s mostly up to individuals to ensure they’re protecting themselves against deepfakes. This starts with protecting the images and videos you share of yourself online, ensuring that you limit the exposure of this content to potentially malicious actors. However, this isn’t easy in the age of social media.
The best things you can do to keep yourself safe include:
1. Learn How to Identify Deepfake Signals
Again, it’s challenging to differentiate between “real” and fake content today. However, there are things you can do to help you assess the authenticity of a piece of content, such as:
- Scrutinize facial features: Look for inconsistencies in the subject’s eyes, such as unnatural blinking patterns or mismatched eye movements. Pay attention to skin tones and textures, and look for signs of unusual “smoothness” in a subject’s skin.
- Examine the background: Deepfake technology can struggle to blend a subject into its background seamlessly. Look for irregularities in shadows, perspective, and lighting.
- Listen for audio issues: Deepfake videos can include audio that doesn’t precisely match the lip movements of the speakers. They can also include unnatural pauses, distortions, and sudden pitch changes. Or they might lack certain inflections and emotions.
- Consider quality: Many deepfakes have lower video quality than authentic videos, so look for signs of unnatural movements, blurring, or pixelation.
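Checks like these can even be partially automated. Below is a toy temporal-consistency score in Python; the `flicker_score` helper and the synthetic “clips” are invented for illustration, and real detectors rely on far richer cues:

```python
import numpy as np

# Toy temporal-consistency check: score a clip by its largest mean
# frame-to-frame jump. A spliced or flickering frame shows up as a spike.
def flicker_score(frames):
    """frames: float array of shape (n_frames, height, width)."""
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.mean(axis=(1, 2)).max()

# Synthetic stand-ins for video: a smooth 30-frame fade, and the same fade
# with one frame abruptly replaced (a crude model of a bad swap boundary).
fade = np.linspace(0, 1, 30)[:, None, None] * np.ones((30, 8, 8))
glitched = fade.copy()
glitched[15] = 0.0                       # one temporally inconsistent frame
```

Here `flicker_score(fade)` is just the steady fade step (1/29 per frame), while the glitched clip scores sixteen times higher: the kind of discontinuity a viewer registers as “something looks off.”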
2. Educate Yourself on AI Deepfakes
Stay up to date with the latest capabilities of AI technology, particularly when it comes to audio and video generation. Understanding how AI content is generated can help you to scrutinize content more effectively as AI becomes more advanced.
Read news reports about AI deepfakes, too. These can give you insights into new potential “red flags” you might need to watch out for when watching or interacting with content online.
3. Use Trusted Platforms and Verify Sources
Remember, seeing is no longer believing. Don’t simply trust everything you see online. When possible, ensure you’re consuming content from reputable sources and platforms that prioritize authenticity. For instance, YouTube now requires creators to disclose when content has been meaningfully altered or synthetically generated.
Even when consuming content from reputable sources, always verify the “facts” you’re given. Do your research and check for evidence that the information is true, before sharing it online.
4. Use Technology to Identify AI Deepfakes
As AI deepfakes have grown more prominent, various technology leaders have begun creating solutions to help users identify deepfakes. For instance, Intel’s FakeCatcher analyzes subtle physiological details in videos to determine whether they’re real or fake.
Microsoft has its own AI-powered deepfake detection software that analyzes photos and videos and gives them a “confidence” score based on how likely they are to be authentic. You can also experiment with solutions like:
- Sensity AI: A detection platform that uses deep learning to identify examples of synthetic media, similar to how anti-malware tools look for malware and virus signatures.
- Operation Minerva: This platform uses catalogs of previously identified deepfakes to determine if new videos modify existing fakes.
- Sentinel: This cloud-based solution offers real-time deepfake detection services using various technologies, such as facial landmark analysis, temporal consistency checks, and flicker detection.
Looking to the Future of AI Deepfakes
Right now, AI deepfakes are growing in prominence at an incredible scale. Although there are beneficial use cases for deepfakes, the reality is that they’re usually more dangerous than they are advantageous. Unfortunately, for the time being, there aren’t a lot of laws, regulations, or guardrails in place that actively stop people from creating deepfakes.
In fact, thanks to the rise of generative AI, it’s easier than ever to create convincing deepfakes. However, steps are being taken to protect users, communities, and companies against the threats of deepfakes. Legislative acts are being drafted around the world.
In the UK, the Online Safety Act, which Ofcom is responsible for enforcing, aims to outlaw the use of AI deepfakes for malicious purposes.
For now, the best thing you can do is remain vigilant, and stay skeptical of the information you see online. Deepfakes are everywhere, and it’s up to you to ensure you don’t believe everything you see.