How the Take It Down Act tackles nonconsensual deepfake porn - and where it falls short

In a rare bipartisan move, the U.S. House of Representatives voted 409-2 on April 28, 2025, to pass the Take It Down Act. The bill takes aim at one of the most distressing forms of abuse on the internet: the viral spread of nonconsensual intimate images, including AI-generated deepfake pornography as well as real photos shared without consent, commonly known as revenge porn.

The bill now awaits President Trump’s expected signature. It gives victims a mechanism to compel platforms to remove intimate content shared without their consent and holds those who distribute it accountable.

As a scholar who studies AI and digital harms, I see the bill as an important milestone. Yet it leaves troubling gaps. Without stronger protections and a broader legal framework, the law risks making promises it cannot keep. Enforcement challenges and privacy blind spots could leave victims just as vulnerable as before.

The Take It Down Act targets “nonconsensual intimate visual depictions,” a legal term covering what most people call revenge porn and deepfake porn. These are sexual images or videos, often digitally manipulated or entirely fabricated, circulated online without the consent of the person depicted.

The bill requires online platforms to establish a user-friendly takedown process. When a victim submits a valid request, the platform must act within 48 hours. Failure to do so could trigger enforcement by the Federal Trade Commission, which could treat violations as unfair or deceptive acts or practices. Criminal penalties also apply to those who publish these images: Offenders face fines and up to three years in prison if the person depicted is a minor, and fines and up to two years in prison if the subject is an adult.

An increasingly serious problem

Deepfake porn is no longer a niche issue; it is a rapidly spreading crisis. With increasingly powerful and easy-to-use AI tools, anyone can create hyperrealistic sexual images in minutes. Public figures, former partners and especially minors have become routine targets. Women are disproportionately harmed.

These attacks upend lives. Victims of nonconsensual intimate image abuse suffer harassment, online stalking, ruined job prospects, public humiliation and emotional trauma. Some are driven off the internet entirely. Others are retraumatized each time the content resurfaces. Once these images are online, they are nearly impossible to contain; they do not simply disappear.

Against this backdrop, a fast, standardized takedown process could offer meaningful relief. The bill’s 48-hour response window promises to return some measure of control to people whose dignity and privacy have been violated. But despite that promise, unresolved legal and procedural gaps could still undermine its effectiveness.

[embed]https://www.youtube.com/watch?v=q9hyhplafzo[/embed]

NBC News outlines the Take It Down Act.

Blind spots and shortcomings

The bill applies only to public-facing interactive platforms that host user-generated content, such as social media sites. It may not reach the countless private forums or encrypted peer-to-peer networks where such material often appears first. This creates a critical legal gap: When nonconsensual images are shared on closed or anonymous platforms, victims may never learn the content exists, or may not learn in time, let alone have a chance to request its removal.

Even on platforms the bill does cover, implementation will be challenging. Determining whether content depicts the person in question, was shared without consent and implicates hard-to-define privacy rights requires careful judgment, legal understanding, technical expertise and time. Yet platforms must make these calls within 48 hours or less.

Time, meanwhile, is a luxury victims do not have. Even within a 48-hour takedown window, content can spread widely before it is removed. The bill offers no meaningful incentive for platforms to proactively detect and remove such content, and it lacks a strong enough deterrent to stop malicious actors from creating these images in the first place.

The takedown mechanism itself can also be abused. Critics warn that the bill’s broad language and lack of safeguards could lead to over-censorship, sweeping up journalism and other lawful content. Because platforms can be flooded with takedown requests, both legitimate and malicious, including some filed to suppress speech or art, they may turn to poorly designed, intrusive automated filters that issue blanket removals or mistakenly delete content that falls outside the law’s scope.

Without clear standards, platforms are likely to err in both directions. How the FTC will hold platforms accountable under the bill is another open question.

The burden on victims

The bill also places the burden of action on victims, who must find the content, complete the paperwork, explain that it was shared without their consent and submit personal contact information, often while still coping with the emotional fallout.

Furthermore, while the bill covers AI-generated deepfakes and revenge porn involving real images, it does not address the full complexity of what victims face. Many people are in unequal relationships and may “consent” under pressure, manipulation or fear to intimate content that is later posted online. Such situations can fall outside the bill’s legal framework: It addresses consent obtained through overt threats and coercion but overlooks subtler forms of manipulation.

Even for those who do pursue the takedown process, there are risks. Victims must submit contact information and a statement that the image was shared without their consent, with no legal guarantee that this sensitive data will be protected. That exposure could invite new waves of harassment and exploitation.

Loopholes for perpetrators

The bill includes conditions and exceptions that can shield distributors from liability. Consequences under the Take It Down Act may be avoided if the content was shared with the subject’s consent, if it concerns a matter of public interest, or if it was posted unintentionally or without evident harm. If an offender denies that harm was caused, the victim faces an uphill battle: Emotional distress, reputational damage and career setbacks are real, but they rarely come with clear documentation or a direct chain of causation.

Also concerning is the act’s exception allowing publication of such content for legitimate medical, educational or scientific purposes. Though well intentioned, this language creates a confusing and potentially dangerous loophole. It risks becoming a shield for exploitation masquerading as research or education.

Getting ahead of the problem

Notice-and-takedown mechanisms are fundamentally reactive: They intervene only after the damage has begun. But deepfake porn is built for rapid spread. By the time a takedown request is filed, the content may already have been saved, reposted or embedded across dozens of sites, some hosted overseas or buried in decentralized networks. The current bill offers a system that treats the symptoms while leaving the underlying harms in place.

In my research on algorithmic and AI harms, I have argued that legal responses should go beyond reacting to individual incidents. I have proposed a framework that anticipates harm before it occurs, not one that merely responds after the fact. That means incentivizing platforms to take proactive steps to protect the privacy, autonomy, equality and safety of users exposed to AI-generated images and tools. It also means expanding accountability to cover more perpetrators and platforms, backed by stronger safeguards and enforcement systems.

The Take It Down Act is a meaningful first step. But to truly protect the vulnerable, I believe legislators need to build stronger systems: ones that prevent harm before it happens and that treat victims’ privacy and dignity not as afterthoughts but as fundamental rights.