In recent weeks, the term ‘cheapfake’ has shot to the forefront of our national consciousness. Cheapfakes, and their equally disruptive counterpart, deepfakes, are becoming far more prevalent, with the volume of this misleading content estimated to be doubling online every six months. That’s why the world’s leading search engines, social media networks and content publishers are taking notice: Google recently announced a far-reaching plan to reduce the discoverability of deepfakes in its search rankings.
Luckily, you don’t need the resources of Google to spot altered media. Here, we’ll examine the primary differences between cheapfakes and deepfakes, as well as the AI-based tools and techniques that can help detect them.
Cheapfakes are media that have been manipulated through inexpensive, widely accessible means such as commercial photo and video editing software. Many are created with tools like Adobe’s editing applications, which enable speeding up and slowing down video, as well as animation, movement effects and face swapping. Cheapfakes often have an amateurish feel; any teenager who has taken a basic computer animation class can create one. They are essentially the equivalent of low-budget special effects, producible with limited skills.
Deepfakes, on the other hand, are more realistic than cheapfakes and much harder to detect. They require a much higher level of training, competency and tooling to create. Even at the low end, some programming knowledge is needed. At the higher end, creators need expertise in prompt engineering to drive generative AI tools like Sora. The most sophisticated echelon of deepfakes makes use of generative adversarial networks (GANs), in which two neural networks compete and evolve: a generator learns to create fakes, while a discriminator learns to detect them. This is where it becomes very difficult to discern what’s real from what’s not.
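To make the adversarial setup concrete, here is a minimal, illustrative GAN training loop in PyTorch. It is a sketch of the general technique, not of any particular deepfake system; the tiny fully connected networks and the 64-dimensional stand-in “images” are assumptions chosen only to keep the example short.

```python
# Minimal GAN sketch: a generator learns to produce fakes while a
# discriminator learns to tell them apart from real samples.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # toy sizes: noise vector and flattened "image"

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for real training images; a real system uses a real dataset.
    return torch.randn(n, DATA) + 2.0

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call the fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks improve together, the generator’s output becomes progressively harder for any detector, not just this discriminator, to flag.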
The good news in all of this? It is often possible to tell the difference between a cheapfake or deepfake and a real piece of content, because these materials tend to introduce inconsistencies that can be detected.
Look out for the following when trying to determine a piece of media’s authenticity:
- Are body and facial positions awkward or unusual in the foreground or background?
- Is the coloring unnatural?
- Does the aging of the skin not match the subject’s eyes or hair?
- Are the people in the video not blinking, or blinking unnaturally? (A simple blink-rate check is sketched after this list.)
- Is the audio inconsistent with the visuals?
- Are limbs awkwardly blended into the background, or do they otherwise appear odd?
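The blinking cue above can even be checked programmatically. The sketch below uses OpenCV and MediaPipe’s FaceMesh to estimate an eye aspect ratio (EAR) per frame and count blinks; people typically blink every few seconds, so a talking-head clip with few or no blinks deserves scrutiny. The landmark indices and the 0.2 threshold are common heuristic choices, not universal constants, and the file name is hypothetical.

```python
# Hedged sketch: count blinks with an eye-aspect-ratio (EAR) heuristic.
# Requires: pip install opencv-python mediapipe
import cv2
import mediapipe as mp

# Six MediaPipe FaceMesh landmarks around the left eye (a common choice).
LEFT_EYE = [33, 160, 158, 133, 153, 144]
EAR_THRESHOLD = 0.2  # heuristic: below this, the eye is treated as closed

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small when the eye is shut.
    def dist(a, b):
        return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(video_path):
    blinks, eye_closed = 0, False
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            ear = eye_aspect_ratio([lm[i] for i in LEFT_EYE])
            if ear < EAR_THRESHOLD and not eye_closed:
                blinks, eye_closed = blinks + 1, True
            elif ear >= EAR_THRESHOLD:
                eye_closed = False
    cap.release()
    return blinks

print(count_blinks("suspect_clip.mp4"))  # hypothetical file name
```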
We recommend a combination of the good old-fashioned human eye, which is usually sharp enough to tell when something is ‘off’, and one or more of the following AI-based techniques, which are very effective at detecting manipulation. This is “using AI to detect AI,” so to speak.
In the case of images, one giveaway of a cheapfake or deepfake is a watermark or provenance marker embedded by the tool that produced it. Many generative tools embed such markers, and detection tools can check for them, although not every fake carries one.
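Robust invisible watermarks generally require the vendor’s own detector, but a first-pass check of an image’s embedded metadata is straightforward. The sketch below, using Pillow, looks for generator hints in EXIF fields and PNG text chunks; the keyword list is an illustrative assumption, many fakes strip their metadata entirely, and the file name is hypothetical.

```python
# First-pass provenance check: look for generator hints in image metadata.
# Requires: pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative keywords; real generators vary, and absence proves nothing.
GENERATOR_HINTS = ("midjourney", "stable diffusion", "dall-e", "firefly", "generated")

def metadata_flags(path):
    img = Image.open(path)
    findings = []
    # EXIF fields such as Software or ImageDescription sometimes name the tool.
    for tag_id, value in img.getexif().items():
        name = TAGS.get(tag_id, str(tag_id))
        if any(h in str(value).lower() for h in GENERATOR_HINTS):
            findings.append(f"EXIF {name}: {value}")
    # PNG text chunks (e.g. a 'parameters' field) can carry generation prompts.
    for key, value in img.info.items():
        if any(h in f"{key} {value}".lower() for h in GENERATOR_HINTS):
            findings.append(f"info {key}: {str(value)[:80]}")
    return findings

print(metadata_flags("suspect_image.png"))  # hypothetical file name
```

Fake videos, on the other hand, can be detected through several techniques, including: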
- Facial landmark analysis, which works by tracking the positions of key features of a face in the video, such as the eyes, nose and mouth, and has demonstrated the ability to identify manipulation at the pixel level. Temporal-consistency analysis goes further by examining how those features move over time; cheapfakes and deepfakes often show subtle inconsistencies in these motions.
- Additionally, flicker detection can find signals consistent with the stitching together of different sources. These tools detect inconsistencies in lighting or color across a video that point to possible manipulation (a minimal sketch follows this list).
- Another technique focuses on lip movements and lip-syncing errors. Speech consists of sounds that correspond to particular lip shapes, so tools can compare lip movements against the audio track; any discrepancy may indicate a cheapfake or deepfake (see the second sketch after this list).
- Finally, other tools focus on analyzing variations in grayscale tones within a video. The human eye can distinguish only a limited number of gray shades, but cheapfakes and deepfakes can introduce grayscale inconsistencies that algorithms readily pick up.
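As a first sketch, the flicker and grayscale ideas above reduce to simple frame statistics. The snippet below, using OpenCV and NumPy, computes each frame’s mean grayscale intensity and flags frame-to-frame jumps well outside the clip’s normal variation; the 4-sigma threshold is an assumption for illustration, and production tools use far richer per-region statistics.

```python
# Flicker sketch: flag abrupt jumps in mean grayscale brightness between frames.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

def flicker_frames(video_path, sigmas=4.0):
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        means.append(float(gray.mean()))
    cap.release()

    diffs = np.abs(np.diff(means))  # frame-to-frame brightness change
    threshold = diffs.mean() + sigmas * diffs.std()
    # Indices of suspicious jumps; smoothly shot footage stays below threshold.
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

print(flicker_frames("suspect_clip.mp4"))  # hypothetical file name
```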
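The lip-sync idea can be sketched the same way: measure how open the mouth is in each frame and compare it against the loudness of the audio track. The snippet below assumes librosa (with ffmpeg available) can read the clip’s audio, and it reuses MediaPipe FaceMesh from the blink example; the inner-lip landmark indices are a common choice, the correlation readout is only a rough signal, and the file name is hypothetical.

```python
# Lip-sync sketch: correlate mouth openness with audio loudness per frame.
# Requires: pip install opencv-python mediapipe librosa  (plus ffmpeg)
import cv2
import librosa
import mediapipe as mp
import numpy as np

UPPER_LIP, LOWER_LIP = 13, 14  # inner-lip landmarks in MediaPipe FaceMesh

def lip_audio_correlation(video_path):
    # Mouth openness per frame: vertical gap between the inner lips.
    openness = []
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if res.multi_face_landmarks:
                lm = res.multi_face_landmarks[0].landmark
                openness.append(abs(lm[UPPER_LIP].y - lm[LOWER_LIP].y))
            else:
                openness.append(0.0)
    cap.release()

    # Audio loudness (RMS energy), resampled to one value per video frame.
    audio, sr = librosa.load(video_path, sr=None)
    rms = librosa.feature.rms(y=audio, hop_length=int(sr / fps))[0]

    n = min(len(openness), len(rms))
    return np.corrcoef(openness[:n], rms[:n])[0, 1]

corr = lip_audio_correlation("suspect_clip.mp4")  # hypothetical file name
print(f"mouth/audio correlation: {corr:.2f}")  # low values suggest a sync problem
```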
Given the proliferation of cheapfakes and deepfakes, and their potential for igniting societal unrest as well as threatening individual privacy, we believe detection tools need to be deployed more aggressively and proactively. For example:
- Video sharing sites could integrate deepfake detection tools to flag or verify the authenticity of the uploaded content.
- Cheapfake and deepfake detection techniques can become part of educational tools to empower students to critically evaluate online content and recognize deepfakes when they come across them.
- Artists can have their reputations and their art compromised by deepfakes. The art community can adopt cheapfake/deepfake detection to authenticate the origin of digital artwork or determine whether a celebrity appearance in a film is real or synthetic.
- Deepfake detection can be used to identify and flag deepfakes created for cyberbullying, helping protect children and other vulnerable victims and hold bullies accountable.
It’s clear that AI-based deepfake detection technologies are still developing and haven’t yet reached full maturity. These solutions are statistical in nature: they estimate whether an image or video has been manipulated, and they can make an incorrect determination. The truth is that cheapfake/deepfake creators and detection tools are in a constant arms race, and the fakes are getting ever more sophisticated, making it harder for detection tools to distinguish what’s real from what’s fabricated. It is also important to remember that AI detection tools are trained on large datasets of images or videos, and those datasets can introduce bias.
Detection algorithms are only as good as the training data fed into them, and new deepfakes are constantly being developed, continually challenging existing tools. The good news is that considerable progress has been made, and these tools improve with every iteration. However, we still have a long way to go to ensure that society does not fall prey to these manufactured images.
Dr. Mohamed Lazzouni is CTO, Aware.