Why the No Fakes Act Is Pushing Platforms to Act on AI

Lauren Hendrickson
June 4, 2025

Key Takeaways:

  • The No Fakes Act is closing a major legal gap around AI-generated content. This bill gives creators and everyday users stronger protection against deepfakes and unauthorized use of their voice or image.
  • Big Tech platforms are backing the No Fakes Act but still lack strong enforcement. YouTube, TikTok, Meta, and Spotify have introduced AI policies, yet inconsistent labeling and weak detection tools leave users exposed to synthetic content.
  • Agencies and industry groups are leading the push for ethical AI in entertainment. The Human Artistry Campaign and Creative Artists Agency (CAA) are promoting consent, fair use, and digital identity protection.


Artists, actors, and musicians have long voiced concerns about AI tools that can mimic their voices, faces, and performances without consent, as noted in our previous article. Today, those concerns are turning into real challenges. Platforms, agencies, and creators are starting to see that AI-generated content is not just a creative issue; it also raises serious business and legal risks.

As synthetic media becomes more common on platforms like YouTube, Spotify, and TikTok, it’s becoming harder to tell what’s authentic. The conversation is shifting beyond artistic control to include how content is managed, how trust is maintained with users, and how companies respond when things go wrong.

In response, more people across the entertainment industry are calling for clearer protections like the No Fakes Act. The pressure is growing for platforms to take responsibility and move away from a passive approach.

The Legal Gap That Led to the No Fakes Act

The No Fakes Act didn’t come out of nowhere. It followed years of trying to stretch outdated laws to cover problems they were never meant to address. As AI tools became faster, cheaper, and easier to use, the gaps in legal protection became too big to ignore.

For a long time, protecting someone’s name, image, or voice relied on what’s known as the “right of publicity.” States like California and New York created their own versions. For instance, California’s Civil Code Section 3344 makes it illegal to use someone’s likeness in advertising without permission. But these laws were written before the internet, and long before AI made it possible to clone a voice or create a deepfake video in minutes.

Without a national standard, the rules varied. Some states offered strong protections, while others didn’t. And most of those existing laws never considered AI-generated content at all.

As the technology improved, the issues got worse. Deepfake videos of politicians started going viral. AI-generated songs that copied real artists began showing up online. In some cases, entire albums were posted under a famous name, even though no human vocals were used. Groups like SAG-AFTRA and the Human Artistry Campaign began speaking out, pointing out that the misuse of someone’s identity wasn’t just a celebrity problem—it could affect anyone.

Courts had a hard time keeping up. Some lawsuits moved forward, but many were dismissed because the laws didn’t quite fit. The legal system wasn’t built for this kind of technology. Eventually, lawmakers began to see the need for a new solution. That’s when the idea for the No Fakes Act started to take shape.

What Is the No Fakes Act and Why It Matters 

The No Fakes Act is a federal bill introduced in 2023 by a bipartisan group of U.S. senators. Its main goal is to prevent the unauthorized use of a person’s voice, face, or likeness in AI-generated content. This includes fake ads featuring a celebrity’s voice, AI-generated songs that imitate well-known artists, or videos that falsely show someone endorsing something without their permission.

This kind of content is becoming more common, and it’s testing the limits of today’s laws. The No Fakes Act is meant to fill those gaps. It gives stronger protection to individuals and puts more responsibility on platforms, studios, and agencies that host or publish this kind of material. These organizations may need to upgrade their tools, set clearer policies, and be more transparent about what they allow.

The bill also challenges the idea that platforms are just neutral spaces. As synthetic content spreads, it’s harder for them to argue they have no control over what’s shared. Senator Chris Coons explained it clearly: “Everyone deserves the right to own and protect their voice and likeness, no matter if you’re Taylor Swift or anyone else.” His comment reflects the growing push for platforms to protect both public figures and everyday people from being misrepresented.

Why the No Fakes Act Matters for Platforms Now

The rise of fake or altered content is no longer a future concern. It is already reshaping legal, cultural, and commercial systems. For platforms, the pressure is growing. Synthetic media is making it harder to know what is real and is challenging how companies manage content and build trust with users.

The case of George Carlin is a clear warning. In 2024, the comedian’s estate settled with the creators of a podcast that used AI to simulate his voice without consent. While framed as a tribute, the episode raised public and legal concerns and was ultimately taken down. If the likeness of a well-known figure can be used without permission, everyday users are even more vulnerable.

This is where the No Fakes Act becomes especially relevant. It introduces clear legal boundaries for how someone’s voice, image, and likeness can be used—giving platforms a foundation for policy and enforcement that doesn’t yet exist in many jurisdictions. The law helps shift responsibility from users to the systems that host and distribute content.

Platforms that wait for perfect detection tools or public pressure before acting risk losing credibility. The No Fakes Act offers a proactive framework for defining digital consent and preventing misuse. For companies navigating the future of content moderation, now is the time to align with legislation that supports accountability and protects identity at scale.

Tech Platforms Support the No Fakes Act but Struggle to Enforce It

Major platforms are starting to take the risks of AI-generated content more seriously. YouTube, for example, has introduced clearer rules. In early 2024, it rolled out policies that require creators to label videos that include altered or AI-generated content—like cloned voices, synthetic faces, or edited scenes that could mislead viewers. Creators who do not follow these rules risk having their content removed or facing penalties.

This reflects a broader shift in how the industry is responding. Companies like Google, Disney, and YouTube have voiced support for the No Fakes Act, signaling that synthetic media is no longer seen as just a creative or celebrity issue. Instead, it’s being treated as a growing challenge that affects brand reputation, legal risk, and public trust.

But even as support grows, platforms are still figuring out how to respond in practice. Enforcement is proving difficult for a few key reasons. Detection tools are still in early stages. Not all AI-generated content includes metadata or visible markers. And with the speed and volume of uploads, it’s hard to catch everything before it spreads.

Here is how individual platforms are responding, and where they still fall short:

1. TikTok

TikTok has made some progress by labeling AI-generated content using embedded metadata and by joining the Coalition for Content Provenance and Authenticity (C2PA), an industry initiative aimed at building standards for tracking digital content origins.

Despite this, TikTok’s labeling is often limited to content created with in-app tools. Many videos created outside the app are uploaded without any form of disclosure, and moderation teams can’t always catch synthetic content in time. TikTok’s AI labeling policies remain a work in progress, especially given how fast viral trends and challenges move across the platform.
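To make the idea of “embedded metadata” a little more concrete, here is a minimal, illustrative sketch of how a platform might flag uploads that appear to carry C2PA provenance data. It only scans for the byte label that C2PA manifests use (“c2pa”), so it is a heuristic rather than verification; the file name is a hypothetical example, and a real pipeline would use the official C2PA SDK to parse and cryptographically validate the manifest.

```python
# Illustrative sketch only: a crude heuristic for spotting embedded C2PA
# provenance metadata in an uploaded media file. Finding the "c2pa" label
# suggests a manifest is present, but it does NOT verify the manifest or its
# signatures; production systems should rely on official C2PA tooling for that.

def looks_like_c2pa(path: str) -> bool:
    """Return True if the file appears to contain a C2PA manifest label."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data

# Hypothetical usage (file name is made up):
# looks_like_c2pa("user_upload.jpg")
```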

2. Meta (Facebook and Instagram)

Meta has introduced “Imagined with AI” labels on some images created with generative tools and says it plans to expand this labeling to video and audio. It has also committed to watermarking AI-generated content shared on its platforms.

But user awareness remains low. Many people scroll past AI-manipulated posts without realizing they’ve been altered. And since Meta relies on detection and labeling across a wide range of tools, the system isn’t always consistent. Without visible warnings or clear education, users may not be equipped to question what they’re seeing.

3. Spotify

Spotify has taken a firm stance against impersonation. In 2023, it removed AI-generated songs that copied the voices of major artists like Drake and The Weeknd. It also updated its terms of service to prohibit content that mimics real individuals without permission.

But Spotify hasn’t introduced much transparency for artists or listeners. Artists can’t easily track how or where their voices are being misused, and users are rarely told when a song is generated by AI. Without detection tools or visible labels, Spotify’s policies rely heavily on user reports and public backlash.

4. Twitch and Livestreaming Platforms

Twitch and similar live-content platforms are among the least prepared to handle real-time AI abuse. While Twitch has community guidelines banning misinformation and harmful content, it lacks clear policies or tools aimed specifically at AI-generated impersonation.

This is especially concerning for livestreamers, who are increasingly vulnerable to voice cloning and deepfake avatars being used either in their own streams or to mimic them elsewhere. Real-time moderation is a significant technical challenge, but it’s one that Twitch and others will need to address as AI becomes easier to use on the fly.

How the Human Artistry Campaign Is Shaping Industry Standards

As platforms work to catch up on enforcement, industry groups are stepping in to offer a clearer path forward. One leading example is the Human Artistry Campaign, supported by organizations like the RIAA, SAG-AFTRA, and Universal Music Group. The initiative focuses on making sure AI tools are used in ways that support artists rather than replace or exploit them.

The campaign promotes seven key principles, including the need to obtain permission before using someone’s voice or image, to credit original creators, and to ensure artists are paid fairly. These principles give companies and platforms a framework for using AI in ways that respect human work.

Dr. Moiya McTier, a senior advisor to the campaign, described the importance of these protections: “The NO FAKES Act is an important step toward necessary protections that also support free speech and AI development.”

Beyond promoting values, the campaign advocates for practical solutions. It urges companies to build detection tools for unauthorized use, update policies to reflect emerging AI risks, and foster transparent, creator-focused environments. It also works with lawmakers to shape policies that align with the needs of artists in an evolving digital landscape. Through these efforts, the Human Artistry Campaign is helping the creative industry adapt to AI by setting ethical standards and protecting the rights of those who make the content.

Talent Agencies and Labels Adapt to Protect Artist Likeness

As advocacy groups set ethical standards, entertainment companies are translating those principles into action. Talent agencies and record labels are updating their roles to better protect the artists they represent in a landscape shaped by AI. Their shared goal is to give artists more say in how their likeness and creative work are used in synthetic media.

Talent agencies such as Creative Artists Agency (CAA) are now helping clients manage digital risks alongside traditional career support. This includes monitoring for unauthorized use of a client’s voice, face, or performance online and taking action when necessary. Protecting a person’s digital identity has become a regular part of modern talent representation.

Record labels are also taking steps to address these concerns. Some have started negotiating licensing deals with AI music companies to define how copyrighted music can be used. For example, several major labels have entered discussions with Udio and Suno—two generative music startups that create songs based on text prompts and musical styles. These talks are focused on setting clear terms for when and how AI can reuse recorded music. This approach allows labels to explore new technology while still defending the rights of the artists they represent.

Although agencies and labels may use different strategies, they are working toward the same outcome. They aim to protect creative identity, maintain control over how likeness and content are used, and ensure that artists are involved in every step of the process. The industry is shifting toward long-term systems built on consent, accountability, and artist involvement. The goal is to prevent misuse before it happens, rather than only responding after the fact.

Conclusion

The line between real and artificial is getting harder to see. As AI tools become easier to use, copying someone’s voice or appearance is no longer something only professionals can do. Anyone with a smartphone or laptop can now create content that looks and sounds real.

This growing accessibility raises serious questions about how identity is used online. Without clear rules, the chances of misuse go up—not just for celebrities, but for anyone. Platforms, agencies, and tech companies can’t rely on outdated policies or uneven enforcement. They need consistent standards and proactive steps to manage how synthetic content is created, labeled, and shared.

The No Fakes Act helps create that structure by introducing federal protections for voice, image, and likeness. It outlines expectations around consent and authenticity—both of which are becoming more important to how people experience content online. Whether people trust what they see and hear will depend on the systems that support it. Companies that prioritize transparency and accountability will help create a safer, more trustworthy digital environment. Those that wait risk letting the problem grow unchecked.

Identity.com

Identity.com, as a future-oriented organization, helps businesses give their customers a hassle-free identity verification process. Our organization envisions a user-centric internet where individuals maintain control over their data. This commitment drives Identity.com to actively contribute to this future through innovative identity management systems and protocols.

As members of the World Wide Web Consortium (W3C), we uphold the standards for the World Wide Web and work towards a more secure and user-friendly online experience. Identity.com is an open-source ecosystem providing access to on-chain and secure identity verification. Our solutions improve the user experience and reduce onboarding friction through reusable and interoperable Gateway Passes. Please get in touch for more information about how we can help you with identity verification and general KYC processes.
