The Take It Down Act and the Future of Platform Accountability

Lauren Hendrickson
July 1, 2025

The spread of AI tools has made it easier to create fake images, videos, and voice recordings that look and sound real. While some of these tools are used for entertainment or creative work, they are also being used to harm people. This is especially true in cases where fake intimate images and videos are shared without consent.

For a long time, people targeted by this kind of content have had few ways to get it taken down quickly. Many platforms have been slow to act, and laws have not kept up with how fast this technology has changed. In many cases, the content stays online long enough to cause serious damage, especially when it spreads widely before it is removed.

As these problems have grown, so has public concern. News stories about deepfakes and online abuse have raised questions about what tech companies should be doing to stop it. Lawmakers are beginning to respond with new rules that aim to hold platforms responsible for what they host and how quickly they remove harmful content.

One of the biggest changes arrived recently with a new federal law that focuses on protecting people from this kind of abuse. Passed with broad bipartisan support, it marks a shift in how lawmakers address synthetic abuse online. In the next section, we look at what this law requires and how it could shape the future of online safety.

What Is the Take It Down Act?

The Take It Down Act, signed into law on May 19, 2025, is the first federal law in the United States to directly target the spread of non-consensual intimate imagery, including content generated or manipulated by artificial intelligence.

The law makes it a crime to knowingly share or threaten to share private intimate images without the person’s consent. That includes real photos and videos, as well as AI-generated deepfakes that falsely depict someone in a sexual context. It applies to both adults and minors, with tougher penalties when children are involved.

To be considered a violation, the person sharing the content must know it was created or distributed without permission, and the image must clearly be private. Penalties can include fines and prison terms of up to two years, or more in severe cases.

By recognizing that synthetic abuse can be just as harmful as real images, the law fills a serious gap. It gives victims—especially women and minors—real legal protection, moving beyond relying on platform policies alone. It is one of the first major federal efforts to treat nonconsensual deepfake content as a criminal offense.

Why The Take It Down Act Is a Turning Point

The Take It Down Act stands out not just for what it requires, but for how widely it was supported. It passed the Senate without opposition and cleared the House with a vote of 409 to 2. That level of agreement is rare in any policy debate, especially when it involves the responsibilities of tech companies.

The law was shaped by real cases that made the risks hard to ignore. One of the clearest examples is the story of Elliston Berry, a 14-year-old girl whose AI-generated explicit images were spread online without her knowledge. Although the content was fabricated, the impact was real. Her experience drew national attention and showed lawmakers the urgent need for stronger protections.

Major platforms including Meta and Snap voiced support for the bill, joining public figures like Melania Trump, who has made online safety a core issue. Their support reflects a growing understanding that voluntary efforts are no longer enough. As AI-generated abuse becomes more common, platforms are being called on to adopt clearer rules and take more responsibility.

Many see the Take It Down Act as a long-overdue response to the rise of technology-driven abuse. But its broader significance lies in what it represents. The law reflects a shift in how identity, consent, and control are being understood in digital spaces. Lawmakers and the public are beginning to recognize that a person’s digital likeness deserves protection and clear rules.

This law is not a complete solution, but it marks an important step forward. As concerns grow about the misuse of personal images and synthetic content, the public and policymakers expect platforms to do more than follow the law. They expect platforms to build systems that put user safety, consent, and control at the center.

The Potential Upsides of the Take It Down Act

The Take It Down Act is a meaningful step toward giving people more control over how their image and likeness are used online. By turning what used to be a platform-by-platform choice into a legal requirement, it offers a more consistent and reliable path for victims of AI-generated and nonconsensual content. These changes could bring broader benefits for both users and platforms, including:

  • Faster removal can reduce harm: Most viral content spreads in a matter of hours. By requiring removal within 48 hours, the law gives victims a critical window to limit exposure and prevent lasting damage.
  • A nationwide standard replaces fragmented laws: Before this law, protections varied across states. The Take It Down Act creates a consistent, federal process for handling nonconsensual and AI-generated content, helping close legal gaps that left many without clear recourse.
  • A formal, enforceable process for victims: Victims now have a reliable way to request content removal. This legal backing gives them more control over how their image is used and ensures platforms cannot ignore valid requests without consequences.
  • An opportunity to build public trust: Platforms that respond quickly and transparently may regain trust from users who feel vulnerable to synthetic abuse. Clear rules and visible action signal that a platform takes identity-based harm seriously.

What Are the Concerns Around the Take It Down Act?

While the law brings much-needed protections, it also raises concerns that deserve close attention. Civil liberties groups, privacy advocates, and legal experts have pointed out areas where the law’s broad scope may create risks if not handled carefully. For example:

  • Vague definitions could lead to over-removal: Critics note that the law does not clearly define what counts as synthetic or manipulated content. This could push platforms to remove legitimate content out of caution, raising concerns about censorship and freedom of expression.
  • No clear appeals process for mistaken removals: If content is removed by mistake, there may not be a simple way for users to appeal or recover it. The lack of due process is a concern, especially in gray areas or edge cases.
  • Potential privacy and surveillance risks: To meet the law’s requirements, platforms may turn to large-scale content scanning or detection tools. Privacy advocates argue this could compromise encrypted services and create new forms of content surveillance, even in spaces that were once considered private.

What the Take It Down Act Requires of Platforms

Beyond holding individuals accountable, the law sets out clear responsibilities for the platforms that host this content. These are not suggestions; they are enforceable standards meant to protect people from serious harm.

Platforms that host user content, such as social media sites, video apps, or online sharing services, must now take specific steps when they receive a valid takedown request. These include:

  • Removing the content within 48 hours of receiving a proper notice
  • Blocking the same content from being uploaded again
  • Providing clear instructions on how users can request a removal
  • Accepting requests from victims or someone authorized to act for them
  • Creating a working removal process within one year of the law being passed

These rules do not apply to private messaging apps or internet service providers. They are meant for public platforms where harmful content can be widely shared.

Enforcement will fall under the Federal Trade Commission, which has the authority to issue civil penalties against companies that fail to meet these requirements. While platforms are encouraged to act quickly and in good faith, that is no longer just a best practice. It is now a matter of legal compliance.

For many platforms, following the law may require more than policy updates. It will likely involve new technology, faster review systems, and well-trained teams that can respond in a short time frame.

What Platforms Must Do to Stay Compliant with the Take It Down Act

To meet these legal obligations, platforms need more than a basic takedown button. They must create systems that can confirm identity, trace where a file came from, and stop the same content from being shared again. These changes mark a new approach to online safety that emphasizes clear rules, stronger protections, and user control.

Here are five areas platforms can focus on:

1. Build secure takedown systems that verify identity and consent

Platforms must provide a way for users or their authorized representatives to submit takedown requests and prove their connection to the content. This could involve biometric checks, digital credentials, or other trusted tools that confirm the person requesting removal is the one depicted. These safeguards help prevent abuse and support fair enforcement.
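
To make this concrete, here is a minimal sketch in Python of what a takedown-request intake record could look like, with the 48-hour deadline attached at submission time. The class names, fields, and acceptance check are illustrative assumptions rather than a prescribed design; a real system would plug into the platform’s own identity-verification provider and case-management tools.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum
import uuid

REMOVAL_DEADLINE = timedelta(hours=48)  # removal window described in the Act


class RequesterRole(Enum):
    DEPICTED_PERSON = "depicted_person"
    AUTHORIZED_REPRESENTATIVE = "authorized_representative"


@dataclass
class TakedownRequest:
    content_url: str
    requester_role: RequesterRole
    identity_verified: bool   # e.g. confirmed via a digital credential or ID check
    consent_asserted: bool    # requester affirms the content is nonconsensual
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def removal_deadline(self) -> datetime:
        """Latest time by which the content should be removed."""
        return self.received_at + REMOVAL_DEADLINE


def accept_request(req: TakedownRequest) -> bool:
    """Accept only requests with a verified identity and an asserted lack of consent."""
    if not req.identity_verified:
        return False  # in practice, route to a further verification step instead of dropping
    return req.consent_asserted


# Example: a verified request from the depicted person
req = TakedownRequest(
    content_url="https://example.com/post/123",
    requester_role=RequesterRole.DEPICTED_PERSON,
    identity_verified=True,
    consent_asserted=True,
)
print(accept_request(req), req.removal_deadline.isoformat())
```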

2. Implement systems to verify content origins

Platforms must determine whether content is real, altered, or AI-generated. They can use techniques such as digital watermarking, content hashing, or provenance standards like the C2PA framework, which attaches cryptographically signed metadata to files showing how and when the content was created or changed. These tools help trace the source and integrity of media files, separate genuine harm from false reports, and support efforts to block reposts.
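
As a simple illustration of the hashing piece, the sketch below computes an exact cryptographic fingerprint of a file and checks new uploads against a registry of removed content. This is a hedged sketch, not the C2PA standard itself (reading C2PA manifests requires a dedicated library), and exact hashes break as soon as a file is re-encoded, which is why perceptual matching (next section) is usually layered on top.

```python
import hashlib


def sha256_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Exact-match fingerprint of a media file, computed in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


# Registry of fingerprints for content that has already been removed.
removed_fingerprints: set[str] = set()


def record_removed(path: str) -> None:
    """Remember the fingerprint of a file removed after a valid takedown request."""
    removed_fingerprints.add(sha256_fingerprint(path))


def is_known_removed(path: str) -> bool:
    """True if a new upload is byte-for-byte identical to removed content."""
    return sha256_fingerprint(path) in removed_fingerprints
```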

3. Use AI and content-matching tools to block reposts

A core part of the law is preventing the same harmful material from being uploaded again. Platforms can follow the example of YouTube’s Content ID system, which scans new uploads against a database of flagged content and blocks or flags matches. A similar fingerprinting system for nonconsensual content can help enforce takedown requests and stop future violations.
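
Below is a minimal sketch of that idea using perceptual hashing with the open-source Pillow and ImageHash libraries. It is not YouTube’s Content ID, and the matching threshold is an illustrative assumption that a platform would tune against its own false-positive and false-negative rates.

```python
# Requires the third-party Pillow and ImageHash packages (pip install Pillow ImageHash).
from PIL import Image
import imagehash

# Max Hamming distance at which two images are treated as the same content.
# Illustrative value; a real platform would tune this against labeled data.
MATCH_THRESHOLD = 8

flagged_hashes: list[imagehash.ImageHash] = []


def flag_removed_image(path: str) -> None:
    """Add a removed image's perceptual hash to the block list."""
    flagged_hashes.append(imagehash.phash(Image.open(path)))


def is_repost(path: str) -> bool:
    """True if a new upload is perceptually close to previously flagged content."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in flagged_hashes)
```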

4. Train moderation teams on legal and technical standards

Human moderators play a key role, especially when context matters. Teams need clear training on how to meet the legal requirements of the Take It Down Act, how to recognize synthetic or manipulated media, and how to respond within the 48-hour window.

5. Maintain transparent records and audit trails

Platforms should document takedown activity, including how identity was verified, what actions were taken, and what tools were used. Keeping clear records supports regulatory compliance, builds user trust, and prepares platforms for legal reviews.
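
One way to make such records tamper-evident is to chain each audit entry to the previous one with a hash, as in the hypothetical sketch below. The field names and storage choice are assumptions; a production system would write to append-only storage rather than an in-memory list.

```python
import hashlib
import json
from datetime import datetime, timezone

# In practice this would be append-only storage (e.g. a write-once bucket or ledger table).
audit_log: list[dict] = []


def append_audit_entry(request_id: str, action: str, verification_method: str, tool: str) -> dict:
    """Append a tamper-evident entry; each record embeds a hash of the previous one."""
    prev_hash = audit_log[-1]["entry_hash"] if audit_log else "0" * 64
    entry = {
        "request_id": request_id,
        "action": action,                            # e.g. "content_removed", "upload_blocked"
        "verification_method": verification_method,  # how the requester's identity was confirmed
        "tool": tool,                                 # which detection or matching tool acted
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry


append_audit_entry("req-123", "content_removed", "digital_credential", "perceptual_hash_matcher")
```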

What This Could Mean for the Future of Content Moderation

The Take It Down Act is not just a one-off policy. It signals a growing change in how lawmakers, regulators, and the public think about online content—especially as artificial intelligence becomes more common. Content moderation is starting to evolve in response. We are beginning to see:

1. Growing Federal Momentum

New proposals are reinforcing this shift. The NO FAKES Act, for example, would ban the unauthorized use of a person’s voice or visual likeness in AI-generated content. While the Take It Down Act focuses on protecting private individuals and minors, the NO FAKES Act addresses impersonation, reputational harm, and commercial misuse involving public figures. Tennessee’s ELVIS Act, passed in 2024, also set a precedent by protecting voice rights in the age of AI. Together, these efforts show that lawmakers are beginning to take digital likeness seriously. More legislation is likely to follow in areas such as biometric cloning, political misinformation, and AI-generated voice fraud.

2. States Moving Ahead

More than 30 states have already passed laws targeting synthetic media, especially deepfakes used in political advertising and nonconsensual intimate imagery. States like California and Oregon now require clear labeling of AI-generated content in campaign and commercial contexts. These state-level efforts are helping shape a patchwork of standards that could influence future federal rules.

3. Regulatory Pressure Is Growing

The Federal Trade Commission is drafting new rules aimed at addressing impersonation, personal data misuse, and fraud tied to AI-generated content. At the same time, federal agencies and the White House have released voluntary frameworks encouraging platforms to watermark synthetic media and disclose when content has been artificially altered. These measures are not yet enforceable, but they reflect growing pressure on platforms to act more transparently.

4. A Shift Toward Verifiable Content

There is a growing push to verify the authenticity and origin of online content. Ongoing discussions around Section 230 include proposals to reduce liability protections for platforms that fail to label or detect manipulated media. The Senate’s AI working group has also recommended using provenance tags and standardized metadata to help users and platforms better distinguish between real and synthetic content.

Conclusion

The Take It Down Act marks a real shift in how the United States thinks about content moderation in the age of AI. It gives people a legal way to get harmful, nonconsensual content taken down, something that used to be left up to individual platforms and their policies.

At the same time, synthetic media has a creative side. Deepfakes and generative tools can be used for storytelling, art, and entertainment. So the question is, how do we protect people’s privacy and identity without losing the potential for innovation? 

The answer starts with clarity and consent. Removing harmful content is only part of the solution. Moving forward, the goal should be a digital environment where people are respected, protected, and able to control how their image is used as technology continues to evolve.
