The Dark Side of AI: Your Photos Turned Against You
In a few minutes, anyone can create a deepfake video of you or your children in a compromising setting using just one photo. Is your digital identity protected? The answer is no, and here is why.
Deepfake technology opens up great opportunities in many areas of life (education, learning, creativity, to name a few), while also presenting complex issues regarding misinformation, fraud, artists' rights, and copyright. All of these topics deserve a separate deep dive. This post is not intended to be exhaustive on the subject; it aims to raise wider awareness of the current state of the threat posed by nonconsensual deepfake imagery.
Grok-2 is shockingly "good" and "bad"
The Grok-2 model, announced by Elon Musk's xAI in mid-August 2024, has remarkable deepfake capabilities. Integrated into X (Twitter), it lets users freely create and share realistic images of public and private figures, which of course they started doing immediately. The result has been a wave of shocking and often sexualized images on the platform.
The release of Grok-2 excites and worries many, including myself.
Elon Musk promotes Grok-2 as a response to digital over-censorship. The model's advanced text and vision understanding, together with image generation powered by Black Forest Labs' FLUX.1 and a near-total lack of restrictions, immediately raised serious concerns, including privacy misuse and copyright infringement.
One of the most serious issues with deepfakes is nonconsensual imagery, and it deserves far more attention. The ease with which anyone can now generate convincing fake images and videos, on X and beyond, has turned our online presence into a potential target for harassment, extortion, and retaliation, including sexual. Women are especially at risk, as are younger people.
The Tech Behind It
Deepfakes are created with advanced machine learning models such as diffusion models and customized large language models. In simple terms, diffusion models learn to remove noise from images step by step, while text-to-image systems like DALL-E pair transformer-based language understanding with image generation. Trained on large datasets, these systems learn to understand and recreate complex visual features, including facial structure, expression, and lighting. The result is highly realistic synthetic media that can be hard to distinguish from real content, raising concerns about misuse and the need for reliable detection.
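To make the diffusion idea a little more concrete, here is a minimal, purely illustrative sketch in Python with NumPy (a toy example, not any production system): it shows the forward process that gradually mixes an image with noise. A real generator is trained to predict and subtract that noise step by step, which is what lets it start from pure noise and "denoise" its way to a convincing image.

```python
import numpy as np

# Toy illustration of the *forward* diffusion process: an image is
# gradually mixed with Gaussian noise over T steps. A real generator
# learns the reverse - predicting the noise at each step - so it can
# start from pure noise and work backwards to a realistic image.

rng = np.random.default_rng(0)
T = 1000                                   # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)         # noise schedule
alphas_bar = np.cumprod(1.0 - betas)       # cumulative signal retention

def noisy_version(x0, t):
    """Return the image x0 after t steps of noising (closed form)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = rng.uniform(-1, 1, size=(64, 64, 3))  # stand-in for a face photo
print(np.std(noisy_version(x0, 10)))       # early step: mostly image
print(np.std(noisy_version(x0, 999)))      # late step: essentially pure noise
```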
How Big Is The Problem?
The majority of malicious deepfakes are created with open-source tools. Once the territory of visual effects studios, the technology behind deepfakes is now accessible to anyone with a smartphone and bad intentions; the safeguards built into proprietary models do little to change that.
Millions of deepfake videos circulate online, and their number grows exponentially every year; 99% of sexual deepfakes and 77% of deepfakes overall target women. South Korea is the most heavily affected country.
"It now takes less than 25 minutes and costs $0 to create a 60-second deepfake pornographic video of anyone using just one clear face image," Security Hero’s research shows.
While nonconsensual pornography is the largest problem, deepfakes go beyond it. The technology has eroded people's control over how their photos and likenesses are used, and those likenesses can be exploited for many forms of disinformation and sabotage.
It is on the victim to prove that a harmful video or image is not real while seeking removal and justice, but these sophisticated fakes are often hard to identify. There is no common, reliable detection framework, making this challenge harder.
Tech companies, regulators, and developer communities must collaborate to establish effective mechanisms to track, detect, report, control, and prevent nonconsensual deepfakes. Implementing such controls is difficult, but necessary to protect privacy and sustain trust in digital media. Beyond personal harm, the threat could damage public trust in visual evidence, affecting journalism and law enforcement.
The Legal State
A closer look at the current state of deepfake regulation shows how far behind we are in addressing the issue.
US
At the federal level, there are two notable legislative efforts to address the issue of deepfakes:
DEEP FAKES Accountability Act: This bill aims to protect national security and offer legal recourse to victims by making malicious deepfakes unlawful. It was introduced in Congress to criminalize deepfakes created or shared to deceive or harm others, but it has not yet become law.
DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act): This bill, passed by the U.S. Senate, allows victims of non-consensual sexually explicit deepfakes to sue their creators and distributors, giving them a civil remedy against perpetrators. It must still pass the House of Representatives and be signed by the President to become federal law, but its Senate passage is significant.
Additionally, some states have introduced their own laws against deepfakes:
California, Texas, and a few other states have enacted laws specifically targeting the use of deepfakes in elections and pornography.
New York goes further, making it unlawful to create and distribute sexually explicit deepfakes of a person without their consent.
EU
The EU's AI Act, finalized in December 2023, requires that AI-generated content, and deepfakes in particular, be clearly labeled. The Act does not, however, prohibit the creation or use of deepfakes: they can still be made and shared as long as they are properly labeled. Deepfake abuse of someone's personal image may be actionable under national defamation, privacy, and personal image laws, but the burden of discovering the deepfake and proving it is fake lies with the affected individuals.
Other countries
China has some of the strictest rules on deepfake technology. Its deep synthesis regulations, in force since January 2023, go beyond labeling requirements: creating and disseminating deepfakes of a person without their consent is prohibited, content platforms are required to monitor and take down such material, and using deepfakes for fraud or deception is illegal.
The South Korean government has responded to a surge of sexual deepfake crimes with strict laws and severe penalties. However, South Korean law covers only pornographic deepfakes, not political manipulation or misinformation.
India has no dedicated deepfake laws; its existing protections are largely ineffective, and new regulations, drafted in response to the 2024 elections, have yet to be enacted.
Tech Giants: Part of the Problem or the Solution?
Deepfakes spread largely through global platforms, which are actively developing internal deepfake detection tools, some of them accessible to the public. In July 2023, OpenAI, Google, Microsoft, Meta, Amazon, and others signed the "Voluntary AI Commitments" to promote the ethical and safe development of AI. To help users detect deepfakes, they agreed to identify and watermark AI-generated content and to share innovations and best practices with each other and with the public. Still, much more could be done.
Meta's "Take It Down" effort with the National Center for Missing and Exploited Children (NCMEC) to remove NCII from Facebook and Instagram is its most prominent. Personal image "hashes" can be submitted to the program to prevent online sharing. Hashes allow platforms to detect and stop the circulation of specific images without storing the photos. Meta's systems may utilize a hash's fingerprint to detect and reject any attempt to post the matching picture on its platforms, preventing NCII from spreading.
Google and Jigsaw released a large dataset of deepfake videos, produced with common deepfake generation methods, to strengthen the FaceForensics benchmark; it is publicly available to researchers working on better detection tools. Google has also released a synthetic speech dataset to help counter the misuse of AI-generated audio.
Microsoft built a tool that estimates how likely a video is to have been manipulated, though it is available only to organisations involved in the democratic process.
On TikTok, users must label all synthetic or significantly edited media that shows realistic scenes; undisclosed content is removed by the platform (how effectively, we cannot judge).
X (Twitter)'s efforts, by contrast, look counterproductive at the moment.
Open Questions Remain:
1. Access to Deepfake Detection Tools
Deepfake detection is becoming an AI-era "secret weapon." Open-sourcing promotes innovation, democratizes detection tools, and makes necessary defenses widely available, but it also risks exposing vulnerabilities to bad actors. As with any powerful technology that can be used for good or ill, the questions remain:
What is the right balance between innovation and security? And if security requires some level of control, who holds it, and can we trust them?
2. Enforcement and Collaboration Gaps
We can probably agree that we need robust laws penalizing the creation and intentional distribution of malicious deepfakes, but we also need to be able to enforce them.
Should platforms be obligated to implement detection and reporting methods, and to what extent? How do we ensure balanced regulation without suffocating innovation, while reducing jurisdictional loopholes with effective international collaboration?
Most significantly, does the global tech community need legislation to become more engaged and aligned? If deepfake detection is an AI-era secret weapon, to what extent is it appropriate and beneficial for the major players to share their advances in this field with each other and with the public? And when will the "Voluntary AI Commitments" show more impact?
3. Large and Small Players
Training deepfake detection models requires large datasets of real and fake content. Unless such datasets are open-sourced, acquiring and labeling them is resource-intensive. Large organisations may find it easier to lead here, but smaller developers can bring originality and agility.
How can these groups collaborate? Will small-developer innovation meaningfully address the issue, or will this remain a game predominantly for those with large funds and superior infrastructure?
Final Word
As AI reshapes our lives, tech companies, governments, and society at large must all ensure that the ideals of privacy, consent, and human dignity are not lost.