California’s AI Transparency Act (AB 853) Explained

Lauren Hendrickson
December 9, 2025

Key Takeaways:

  • California’s AI Transparency Act (AB 853) creates new rules for identifying AI-generated content. The law focuses on transparency and provenance so people can understand when digital material has been shaped or altered by AI.
  • The law applies in phases to generative AI providers, large online platforms, and device manufacturers. Each group must adopt different requirements at specific times, beginning in 2026 and extending through 2028. 
  • Companies must attach, preserve, and display provenance signals that show when content was created or altered with AI. They also need to provide the public with reliable tools to verify whether a piece of media is synthetic.

Why Did California Pass AB 853?

People are seeing more digital content that looks real but may not be. Voices that resemble public figures, videos that appear authentic, and ads shaped with synthetic elements have created new uncertainty about what can be trusted online. This shift has prompted lawmakers, platforms, and creators to examine how audiences can better understand when technology has influenced what they see.

California has begun addressing this challenge by exploring ways to make the origins of digital content easier to trace. The state has a long record of setting early standards in privacy and consumer rights, and its entertainment and technology sectors feel the impact of synthetic media earlier and more sharply than most. Actors, musicians, studios, and digital platforms have raised concerns about how quickly AI-generated material can circulate and how difficult it can be for audiences to recognize when something has been altered.

Lawmakers are trying to improve visibility into how digital material is produced so people have clearer context for what appears on their screens. That effort led to new transparency requirements intended to give users a better understanding of when AI has shaped or modified content. The next section outlines what the AI Transparency Act, AB 853, introduces as part of this broader response.

What Is California’s AI Transparency Act (AB 853)?

The AI Transparency Act, known as AB 853, is a California law passed in 2025 that focuses on increasing clarity around the production of digital content. It establishes transparency and provenance standards for organizations involved in creating or distributing material that may contain synthetic elements. California details these responsibilities in its official bill text, which is available through the state’s public record.

Unlike legislation centered on creative ownership or likeness rights, such as the NO FAKES Act or the ELVIS Act, AB 853 is focused on traceability. It calls for clearer signals that indicate when AI has contributed to an image, video, or audio clip. These signals are meant to help viewers understand how a piece of media was produced, whether it appears in a social feed, an advertisement, or on a consumer device.

Who Must Comply With California’s AI Transparency Act?

The AI Transparency Act applies to different parts of the digital ecosystem in stages. Companies that build generative systems are the first to adopt the new rules, followed by the platforms that distribute content and, finally, the manufacturers of devices that capture or edit media. The sections below outline who must comply and when each set of obligations begins.

1. Generative AI Providers

The first requirements begin on August 2, 2026, and apply to developers of generative AI systems with more than one million monthly users in California. These companies sit at the beginning of the content creation pipeline, which means they play a central role in introducing the mechanisms that help people understand when AI has influenced a photo, video, or audio clip.

For example, if an AI service allows users to create promotional images, it must ensure that its output includes the information needed for others to recognize that the file contains synthetic elements. This early step supports the later stages of the transparency process, where platforms and users can verify how the content was produced.

2. Large Online Platforms

The second stage begins on January 1, 2027, when large online platforms take on their responsibilities. This includes social networks, search engines, messaging services, and similar platforms with more than two million monthly users. While these companies do not generate the content themselves, they determine how it circulates and reaches the public.

Once the law applies to them, platforms must be able to detect when AI-generated material includes transparency signals and ensure that this information remains accessible as the content moves across their services. Removing or altering that information would no longer be permitted.

3. Capture-Device Manufacturers

The final phase begins on January 1, 2028, and applies to manufacturers of devices that capture or edit media. Any device first produced for sale in California after this date must give users the option to embed disclosure information in the photos, audio, or video they record.

This matters because authentic recordings and synthetic media are becoming increasingly difficult to tell apart. A new smartphone model, for instance, must support a feature that lets users attach a disclosure signal at the moment of capture, giving anyone who later encounters the file clearer context about its origin.

How AB 853’s Transparency Requirements Work

AB 853 sets out several requirements that help people recognize when AI has influenced digital content. The following sections outline the core mechanisms that make transparency possible.

1. Provenance Signals at Creation

Generative AI systems must embed latent disclosures into any synthetic or AI-altered content they produce. These machine-readable signals, such as metadata, digital signatures, or embedded watermarks, indicate how a file was generated or modified and are added at the moment of creation. They must remain attached whenever feasible, even as the file moves across services.

The law also permits manifest disclosures, which are visible or audible notices that clearly tell viewers when AI has contributed to a piece of content. Manifest disclosures are optional, but latent disclosures are mandatory and form the technical foundation of the transparency framework.
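
To make the mechanics concrete, here is a minimal sketch of what a latent disclosure could look like in practice. It attaches a signed, machine-readable record to a PNG at generation time using a hypothetical ai_provenance text chunk and a placeholder signing key; the format, field names, and key handling are illustrative only, and a real provider would follow a recognized standard such as C2PA with proper key management.

```python
# Illustrative only: a hypothetical latent-disclosure format, not an
# implementation of AB 853 or of any recognized provenance standard.
import hashlib
import hmac
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

SIGNING_KEY = b"provider-secret-key"  # placeholder; real systems use managed keys


def embed_disclosure(src_path: str, dst_path: str, model_name: str) -> None:
    """Attach a signed, machine-readable provenance record to a PNG."""
    manifest = {"generator": model_name, "content_type": "ai_generated"}
    payload = json.dumps(manifest, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("ai_provenance", payload)        # the latent disclosure
    info.add_text("ai_provenance_sig", signature)  # tamper-evidence
    image.save(dst_path, pnginfo=info)
```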

2. Public Verification Tool

Generative AI providers must offer a free, publicly accessible tool that allows anyone to upload a piece of content and check whether the provider’s system created or altered it. The tool must also reveal any embedded provenance signals. This requirement gives the public a way to independently verify authenticity instead of relying solely on platform labels or company statements.
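
Continuing the sketch from the previous section, a verification tool for that hypothetical format would read the embedded record back and confirm it has not been tampered with. A production tool would expose this check through a free public web interface and validate signatures under a recognized standard rather than a shared secret.

```python
# Companion sketch to the embedding example; same hypothetical format.
import hashlib
import hmac
import json

from PIL import Image

SIGNING_KEY = b"provider-secret-key"  # placeholder shared with the embedding sketch


def verify_disclosure(path: str) -> dict | None:
    """Return the provenance record if present and authentic, else None."""
    image = Image.open(path)
    payload = image.text.get("ai_provenance")       # PNG text chunks
    signature = image.text.get("ai_provenance_sig")
    if payload is None or signature is None:
        return None  # no latent disclosure found
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None  # disclosure present but altered
    return json.loads(payload)
```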

3. Platforms Must Preserve and Display Provenance

When content containing provenance signals is uploaded or shared, large online platforms must detect those signals and keep them intact. They cannot remove, obscure, or alter metadata, latent disclosures, or digital signatures.

Platforms must also present provenance information to users in a clear and accessible way whenever AI-generated or AI-altered content appears on their services. This ensures a consistent transparency experience, regardless of where users encounter the material.
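
In terms of the running sketch, the platform-side obligation amounts to detecting provenance records on upload and carrying them through any re-encoding step rather than stripping them, as many image pipelines do today. The field names below are the same hypothetical ones used in the earlier examples.

```python
# Sketch of a platform-side ingest step that preserves provenance chunks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEYS = ("ai_provenance", "ai_provenance_sig")


def reencode_preserving_provenance(src_path: str, dst_path: str) -> bool:
    """Re-save an uploaded PNG while keeping provenance chunks intact.

    Returns True if provenance signals were found and carried forward,
    which a platform could use to trigger an on-screen AI label.
    """
    image = Image.open(src_path)
    info = PngInfo()
    found = False
    for key in PROVENANCE_KEYS:
        value = image.text.get(key)
        if value is not None:
            info.add_text(key, value)  # preserve, never strip or rewrite
            found = True
    image.save(dst_path, pnginfo=info)
    return found
```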

4. Non-Compliant AI Systems Cannot Be Distributed

Model-hosting platforms and similar services cannot offer, distribute, or host generative AI systems that fail to embed the required provenance. The law also prohibits providing tools or services designed primarily to remove or defeat provenance information. These rules prevent synthetic content from shedding its transparency signals after creation.

5. Provenance Must Follow Recognized Technical Standards

To ensure disclosures can be read across different systems, AB 853 requires that provenance and labeling methods follow widely adopted or formally recognized technical standards. This approach prevents fragmentation and creates an ecosystem where content created by one system can be reliably interpreted by others.

The Policy Implications of California’s AI Transparency Act

AB 853 is likely to shape conversations beyond California because the state often sets early standards that others adopt. This pattern has appeared before. The California Consumer Privacy Act helped advance national discussions about data rights, and AB 853 may play a similar role in shaping expectations for transparency in AI-generated media.

The act supports the growing view that transparency should accompany consent and ownership protections. While other proposals focus on the rights of creators and public figures, AB 853 focuses on how information about synthetic content is communicated to the public. Together, these efforts point toward a more complete framework for governing generative tools.

Federal lawmakers and international regulators are watching these developments closely. Many are considering their own approaches, and California’s model offers a practical reference point. It shows how disclosure and provenance measures can be built into AI systems without limiting creative experimentation or access to new tools.

The law also reinforces an idea that is gaining support across the industry. Users should be able to understand how a piece of content was produced. They should not need technical expertise to determine whether AI played a role. By making transparency a standard expectation, AB 853 may encourage broader adoption of provenance technologies across platforms, devices, and generative systems.

Conclusion: Why Transparency Helps People Trust AI Content Again

AB 853 marks a shift in how California expects digital content to be explained and understood. Its focus on transparency gives people clearer insight into how a piece of media was created and whether AI played a role. When that information is easy to see, audiences can make more confident judgments about what they encounter online.

This clarity also benefits creators and the wider creative community. Many rely on the integrity of their voice, image, or artistic style, and transparent disclosure standards help protect that trust. For creators who use AI as part of their workflow, openness about their process can strengthen credibility rather than weaken it. Platforms and developers also gain a more consistent structure for presenting information about AI-generated content.

By encouraging clearer communication around how digital material is produced, California is aiming to rebuild trust at a time when authenticity is harder to assess. Strong transparency practices give audiences better context, offer creators more protection and flexibility, and provide a foundation for the next stage of AI-supported creativity.
