Table of Contents
- Key Takeaways
- What Led to the Creation of the No FAKES Act
- What the No FAKES Act Does and Why It Matters
- What Changed in the 2025 No FAKES Act Revision?
- Why the No FAKES Act Matters for Platforms Now
- Platforms Are Supporting the No FAKES Act but Still Struggle With Enforcement
- Where the No FAKES Act Still Falls Short
- The Future of the No FAKES Act and What Still Needs to Be Done
Key Takeaways:
- The No FAKES Act is a U.S. bill that sets federal rules to prevent unauthorized AI replicas of a person’s face, voice, or likeness.
- Tech platforms are backing the bill but lag on enforcement. YouTube, TikTok, Meta, and Spotify have updated their AI policies, yet weak labeling systems and inconsistent moderation still allow synthetic content to circulate unchecked.
- Lawmakers and advocates are calling for revisions to strengthen clarity and fairness. Supporters say the act is needed to protect human likeness, while others warn it must be refined to avoid overreach and ensure fair use remains protected.
As generative AI becomes more widely used, lawmakers are facing growing pressure to clarify how real people should be represented in synthetic media. Existing federal laws governing copyright, publicity rights, and fraud were developed for earlier forms of media and do not clearly address AI-generated replicas of a person’s voice or likeness.
The No FAKES Act is a legislative response to that uncertainty. Introduced in the U.S. Senate, the bill proposes national rules governing the use of a person’s likeness and voice in AI-generated content. It focuses on consent, assigns responsibilities to platforms and distributors, and introduces penalties for unauthorized use.
The proposal has drawn broad attention from lawmakers, creators, and technology companies. Supporters view it as a necessary step toward modernizing identity protections, while critics continue to question its scope and enforcement. To understand why the bill has gained momentum, it helps to examine the legal gaps that preceded it and how the legislation attempts to address them.
What Led to the Creation of the No FAKES Act
Before the No FAKES Act, likeness protection in the United States was governed almost entirely by state-level publicity laws. These statutes were designed to address traditional forms of misuse, such as unauthorized endorsements or commercial exploitation, and they vary significantly in scope, remedies, and duration across jurisdictions.
In practice, this created enforcement challenges for online content. A single instance of misuse could involve multiple states, platforms, and distributors, each subject to different legal standards. For individuals seeking recourse, outcomes often depended on where a claim was filed rather than the nature of the misuse itself.
Some states moved to update their laws as these challenges became more apparent. Tennessee’s ELVIS Act expanded protections to cover AI-generated voice imitation, while many other states continued relying on older frameworks that do not clearly account for synthetic replicas or automated content generation.
Courts faced similar difficulties. Cases involving AI-generated likeness produced inconsistent results, often shaped by how narrowly existing definitions were interpreted. These inconsistencies prompted industry groups, including SAG-AFTRA and the Human Artistry Campaign, to push for a federal framework that could provide clearer standards for consent and accountability.
The No FAKES Act emerged from these limitations. Rather than replacing state laws, it seeks to establish a consistent baseline that applies across jurisdictions and provides clearer guidance for individuals, platforms, and courts.
What the No FAKES Act Does and Why It Matters
The No FAKES Act proposes a federal framework governing the use of a person’s face, voice, and likeness in AI-generated content. It applies to both public figures and private individuals and focuses on situations where likeness is used without consent in synthetic media.
Under the bill, responsibility extends beyond the creator of the content. Platforms, studios, and distributors that publish or host AI-generated material featuring real people would be required to verify consent and take action when violations occur. This shifts part of the compliance burden to entities that control distribution, rather than placing it solely on individuals to pursue takedowns.
Supporters argue that these provisions are necessary to maintain trust as AI-generated media becomes more common. Senator Chris Coons, one of the bill’s sponsors, has emphasized that control over voice and likeness should not depend on fame, reinforcing the idea that identity protections should apply broadly.
The current version of the bill reflects revisions made in response to feedback from creators, unions, and digital rights advocates. Earlier drafts raised concerns about coverage and enforcement, leading lawmakers to refine definitions and clarify platform responsibilities as the proposal evolved.
What Changed in the 2025 No FAKES Act Revision?
In August 2025, lawmakers introduced a revised version of the No FAKES Act in response to concerns raised about the original proposal. The update focuses on clarifying definitions and narrowing some of the gray areas that emerged during early debate.
One of the most notable changes is an expanded definition of “digital replica.” The revised language covers any face, voice, or digital likeness that could reasonably be mistaken for a real person, regardless of how the replica was created. The update also reinforces the requirement for explicit consent before a likeness can be generated, distributed, or monetized using AI tools.
The revision introduces limited obligations for platforms, requiring them to act once notified of unauthorized content. However, it stops short of outlining clear standards for detection, verification, or proactive enforcement. Legal analysts, including commentary published by The Regulatory Review, have noted that the bill continues to preempt certain state laws without providing detailed federal mechanisms to replace them.
Questions also remain around how the law would apply to parody, fair use, and artistic reinterpretation. Critics argue that while the revision improves clarity, it still relies heavily on reporting and voluntary compliance. Without stronger technical or procedural guidance, enforcement outcomes may vary once the law is tested in practice.
Why the No FAKES Act Matters for Platforms Now
As the No FAKES Act moves forward, attention is turning to how it would be applied in real situations. The revised language sets clearer expectations for how unauthorized likeness use should be handled and makes platforms a central point of accountability.
That role matters because AI-generated content already influences how people judge credibility and authenticity online. Platforms that host or distribute this material are increasingly expected to manage the risks that come with it, especially when real individuals are involved.
The case of George Carlin illustrates what is at stake. In 2024, the comedian’s estate reached a settlement with the creators of a podcast that used AI to imitate his voice without permission. Although the project was framed as a tribute, it raised serious ethical and legal concerns and was ultimately removed. If one of the most recognizable voices in entertainment can be replicated so easily, similar misuse can affect anyone.
Under the No FAKES Act, platforms that fail to respond to unauthorized likeness use could face financial penalties and legal liability. The bill allows individuals to pursue civil action against parties that knowingly create, host, or distribute AI-generated replicas without consent. This structure places responsibility not only on creators, but also on the systems that enable distribution.
By clarifying how likeness may be used and who is accountable when boundaries are crossed, the Act gives platforms a shared foundation for building consent verification, content review, and response processes. Companies that adapt early will be better positioned to meet regulatory expectations and preserve trust as AI-generated media becomes more widespread.
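To make the idea of a consent-verification gate concrete, here is a minimal sketch of the kind of check a platform might run before publishing content that replicates a real person. The names used here (ConsentRecord, has_valid_consent) and the idea of a consent database are illustrative assumptions; the bill itself does not prescribe any specific technical mechanism.

```python
# A minimal sketch, assuming a platform keeps some record of likeness-use consent.
# All names and data structures here are hypothetical, not mandated by the No FAKES Act.

from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    person_id: str        # the individual whose likeness or voice is replicated
    uploader_id: str      # the account seeking to publish the replica
    expires: date         # consent grants are typically time-limited

def has_valid_consent(records: list[ConsentRecord], person_id: str, uploader_id: str) -> bool:
    """Check whether an unexpired consent record covers this uploader and person."""
    today = date.today()
    return any(
        r.person_id == person_id and r.uploader_id == uploader_id and r.expires >= today
        for r in records
    )

# Example: hold publication for review when no consent record exists (hypothetical IDs).
records = [ConsentRecord("artist-123", "studio-9", date(2026, 1, 1))]
if not has_valid_consent(records, "artist-123", "fan-account-42"):
    print("Hold for review: no consent on file for this likeness.")
```

In practice, such a check would be only one step in a larger review pipeline, but it illustrates why the Act's emphasis on documented consent pushes platforms toward storing and querying consent state rather than relying solely on after-the-fact takedowns.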
Platforms Are Supporting the No FAKES Act but Still Struggle With Enforcement
Several major technology and media companies, including Google, Disney, and YouTube, have publicly expressed support for the No FAKES Act. Their backing reflects a broader recognition that AI-generated impersonation creates real risks for creators, platforms, and audiences alike.
At the same time, translating that support into consistent platform practices has proven difficult. Policies, labeling tools, and detection systems vary widely, and enforcement often depends on how and where content is created or uploaded. As a result, unauthorized likeness use can still circulate before platforms are able to respond.
These challenges become more apparent when examining how individual platforms are handling AI-generated content today.
1. TikTok
TikTok has made some progress by labeling AI-generated content using embedded metadata and by joining the Coalition for Content Provenance and Authenticity (C2PA), an industry initiative aimed at building standards for tracking digital content origins.
Despite this, TikTok’s labeling is often limited to content created with in-app tools. Many videos created outside the app are uploaded without any form of disclosure, and moderation teams can’t always catch synthetic content in time. TikTok’s AI labeling policies remain a work in progress, especially given how fast viral trends and challenges move across the platform.
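To show what metadata-based labeling involves at a basic level, the sketch below checks an already-extracted metadata dictionary for a provenance signal before deciding whether to apply an "AI-generated" label. The field names are hypothetical simplifications; they are not C2PA's actual manifest format or TikTok's implementation.

```python
# Illustrative sketch only: a simplified stand-in for a metadata-based disclosure check.
# Key names below are hypothetical, not the real C2PA schema.

from typing import Mapping, Optional

# Hypothetical metadata keys that an upstream extraction step might surface.
PROVENANCE_KEYS = ("c2pa_manifest", "content_credentials", "ai_generation_tool")

def needs_ai_label(metadata: Mapping[str, object]) -> Optional[str]:
    """Return a reason string if the upload should be labeled as AI-generated,
    or None if no provenance signal was found."""
    for key in PROVENANCE_KEYS:
        if key in metadata and metadata[key]:
            return f"provenance signal found: {key}"
    return None

# Example: metadata produced by an earlier extraction step (hypothetical values).
upload_metadata = {"content_credentials": {"issuer": "example-generator", "ai_generated": True}}
print(needs_ai_label(upload_metadata))  # -> "provenance signal found: content_credentials"
```

The limitation described above follows directly from this design: if an upload was created outside the app and carries no embedded provenance data, a check like this finds nothing to act on.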
2. Meta (Facebook and Instagram)
Meta has introduced “Imagined with AI” labels on some images created with generative tools and says it plans to expand this labeling to video and audio. It has also committed to watermarking AI-generated content shared on its platforms.
But user awareness remains low. Many people scroll past AI-manipulated posts without realizing they’ve been altered. And since Meta relies on detection and labeling across a wide range of tools, the system isn’t always consistent. Without visible warnings or clear education, users may not be equipped to question what they’re seeing.
3. Spotify
Spotify has taken a firm stance against impersonation. In 2023, it removed AI-generated songs that copied the voices of major artists like Drake and The Weeknd. It also updated its terms of service to prohibit content that mimics real individuals without permission.
But Spotify hasn’t introduced much transparency for artists or listeners. Artists can’t easily track how or where their voices are being misused, and users are rarely told when a song is generated by AI. Without detection tools or visible labels, Spotify’s policies rely heavily on user reports and public backlash.
4. Twitch and Livestreaming Platforms
Twitch and similar live-content platforms are among the least prepared to handle real-time AI abuse. While Twitch has community guidelines banning misinformation and harmful content, it doesn’t have clear policies or tools specifically aimed at AI-generated impersonation.
This is especially concerning for livestreamers, who are increasingly vulnerable to voice cloning and deepfake avatars being used either in their own streams or to mimic them elsewhere. Real-time moderation is a significant technical challenge, but it’s one that Twitch and others will need to address as AI becomes easier to use on the fly.
Where the No FAKES Act Still Falls Short
Despite growing support from platforms and lawmakers, enforcement challenges have exposed limitations in the No FAKES Act itself. While the bill introduces important protections, many experts and creators note that several areas still lack clarity. These gaps affect how the law would function in real-world settings.
1. Defining What Counts as a Likeness or Imitation
AI systems can now reproduce a person’s image, voice, or recognizable traits using minimal source material. Without a clear legal definition of what qualifies as unauthorized imitation, enforcement risks becoming inconsistent. Ambiguity around where acceptable reference ends and misuse begins could leave both creators and platforms unsure of their responsibilities.
2. Clarifying Exceptions for Satire and Fair Use
Another unresolved issue involves how the Act treats parody, satire, and commentary. Free expression groups have raised concerns that broad restrictions could unintentionally affect legitimate creative work. Lawmakers continue to weigh how to protect individuals from misuse while allowing space for artistic and cultural expression that does not rely on deception.
3. Building Real Enforcement Infrastructure
The Act establishes legal rights, but it does not yet specify how violations should be identified and addressed at scale. Detection still depends on emerging tools such as watermarking, provenance tracking, and digital registries. Without shared technical standards, much of the burden may remain on individuals to find and report misuse.
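As a rough illustration of what shared enforcement infrastructure might look like, the sketch below combines the kinds of signals mentioned above (a watermark check, a provenance manifest, a registry match) to triage a report of an unauthorized replica. The signal sources and routing rules are assumptions for illustration; no shared technical standard currently defines them.

```python
# Illustrative only: one hypothetical way detection signals might be combined
# to triage reports of unauthorized AI replicas. Not a standardized workflow.

from dataclasses import dataclass

@dataclass
class DetectionSignals:
    watermark_detected: bool      # e.g., an invisible watermark left by a generator
    provenance_manifest: bool     # e.g., embedded content credentials
    registry_match: bool          # the named individual appears in a likeness registry

def triage(signals: DetectionSignals) -> str:
    """Map detection signals to a review action for a reported upload."""
    if signals.registry_match and (signals.watermark_detected or signals.provenance_manifest):
        return "expedite takedown review"
    if signals.watermark_detected or signals.provenance_manifest:
        return "label as AI-generated and queue for standard review"
    return "manual review: no automated signal available"

print(triage(DetectionSignals(True, False, True)))  # -> "expedite takedown review"
```

The last branch is the crux of the current gap: when no automated signal exists, the burden falls back on the affected individual to find and report the misuse.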
4. Balancing Accountability With Creative Opportunity
Some policymakers and advocacy groups have called for refinements to ensure the Act does not extend beyond its intended scope. Senator Chris Coons has acknowledged ongoing discussions around adjustments that would make the law more flexible as technology changes. Groups such as SAG-AFTRA and the Human Artistry Campaign continue to advocate for language that enforces consent while supporting creative work that operates within clear boundaries.
The Future of the No FAKES Act and What Still Needs to Be Done
The No FAKES Act is one of the strongest efforts so far to establish federal protections for how likeness and voice are used in AI-generated media. It reflects a broader shift toward treating digital replicas as an issue of identity and consent, rather than isolated incidents of misuse.
How effective the law becomes will depend on how clearly its standards are applied across platforms and how consistently responsibility is enforced. Questions around scope, exceptions, and accountability remain unresolved as the bill continues to move forward.
Ultimately, meaningful likeness protection will require more than legislation alone. Clear consent signals, accountable distribution practices, and shared expectations around how identity can be used all play a role. If these elements align, the No FAKES Act could help create a more reliable foundation for protecting both public figures and everyday individuals as AI-generated media continues to evolve.