Introduction
As this year wraps up, protecting digital likeness has become a growing challenge for creators and public figures. Advances in generative AI have made it easier to create realistic images, videos, and audio that resemble real people, often without their knowledge or consent. At the same time, the systems meant to protect individuals from unauthorized AI-generated content have struggled to keep pace.
What stood out over the past year was not a single failure, but a pattern. Existing processes for digital likeness protection rely heavily on manual reporting, fragmented platform rules, and slow review cycles. Visibility into where likeness-based content appears online remains limited. Clear permissions for AI use of a person’s likeness are still rare. Together, these gaps have shifted how creators, platforms, and policymakers are thinking about control, accountability, and trust heading into the year ahead.
Below are seven ongoing issues shaping digital likeness today:
1. There is no way to see everything being created about you
Creators and public figures have limited visibility into how and where their likeness appears online. Content is created and shared across social platforms, messaging apps, forums, marketplaces, and AI tools, often in different formats and under different usernames. There is no single place to track that activity, and most platforms only surface content once it is brought to their attention.
In practice, discovery depends heavily on chance. Creators often learn about unauthorized content through fans, followers, or third parties rather than through platform notifications or proactive alerts. By the time a piece of content is discovered, it may already exist in multiple versions across several platforms, each requiring separate attention.
This lack of visibility makes oversight difficult at scale. Without reliable ways to monitor where and how their likeness appears, creators cannot assess the scope of use or misuse. It also limits their ability to respond consistently, prioritize risk, or understand where content is spreading. The result is an incomplete picture of a person’s online presence, based on what happens to be found rather than everything that exists.
2. No control or permissions system exists for AI use of your likeness
Most AI systems operate without a standardized way for individuals to define how their likeness can be used. Images, videos, and audio that are publicly available are often treated as usable by default, even when the person depicted has not given consent. This leaves creators with little ability to set boundaries around acceptable use.
Unlike other creative industries that rely on licensing and rights management, there is no widely adopted permissions layer for likeness in AI-generated content. Creators cannot easily specify what types of use are allowed, whether commercial or noncommercial use is permitted, or which parties have approval. As a result, expectations around acceptable use are unclear from the start.
The absence of clear permissions creates uncertainty for everyone involved. Creators lack meaningful agency over how their likeness is used, developers operate without consistent guidance, and platforms are left to interpret intent on a case-by-case basis. Without a shared framework for consent and authorization, control remains reactive rather than deliberate.
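To make the gap concrete, here is a minimal, hypothetical sketch of what a machine-readable likeness-permission record could look like. No such standard exists today; every field, category, and identifier below is invented purely for illustration.

```python
# Hypothetical sketch of a likeness-permissions record. No such standard
# exists today; all field names and categories here are illustrative only.
from dataclasses import dataclass, field
from enum import Enum


class UseType(Enum):
    NONCOMMERCIAL_FAN_ART = "noncommercial_fan_art"
    COMMERCIAL_AD = "commercial_ad"
    VOICE_SYNTHESIS = "voice_synthesis"


@dataclass
class LikenessPermission:
    """One person's stated rules for AI use of their likeness."""
    subject_id: str                                     # the person depicted
    allowed_uses: set = field(default_factory=set)      # which UseType values are permitted
    approved_parties: set = field(default_factory=set)  # who may act on those permissions
    requires_disclosure_label: bool = True              # must output be labeled as AI-generated?


def is_use_permitted(record: LikenessPermission, use: UseType, party: str) -> bool:
    """Check a proposed use against the subject's stated permissions."""
    return use in record.allowed_uses and party in record.approved_parties


# Example: fan art is allowed for one approved platform; commercial ads are not.
policy = LikenessPermission(
    subject_id="creator-123",
    allowed_uses={UseType.NONCOMMERCIAL_FAN_ART},
    approved_parties={"fan-platform-x"},
)
print(is_use_permitted(policy, UseType.NONCOMMERCIAL_FAN_ART, "fan-platform-x"))  # True
print(is_use_permitted(policy, UseType.COMMERCIAL_AD, "fan-platform-x"))          # False
```

The point is not the specific fields but the contrast: today there is no agreed place to record these answers, so every platform and tool interprets consent on its own.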
3. Manual takedown processes are too slow
Most platforms still rely on manual reporting systems to remove AI-generated content tied to impersonation or misuse of a person’s digital likeness. That process requires creators to locate the content themselves, submit individual complaints, and wait for review. Even short delays allow content to spread beyond its original source.
Guidance on DMCA enforcement indicates that takedowns frequently take days to weeks, and in some cases months, depending on the platform and review workflow. Even on large platforms with established reporting mechanisms, timelines vary. Copyright claims may be reviewed within several business days, while privacy or impersonation complaints often take longer. During that period, content can remain accessible and continue circulating.
These delays reflect an infrastructure gap. Existing systems were built for traditional copyright disputes and post-publication review, not for likeness misuse that spreads rapidly across platforms. Manual takedowns struggle to keep pace with the speed at which AI-generated content now travels.
4. Fans want to create content, but creators have no safe way to let them
Fan-created content has long been part of online culture, and many creators support it when it is done respectfully and within clear boundaries. AI tools have expanded what fans can make, from images and videos to voice-based content, often with positive intent. The issue is not creativity itself, but the lack of safeguards that allow creators to participate on their own terms.
Today, creators face an all-or-nothing choice. They can allow broad, uncontrolled use of their likeness, or attempt to restrict it entirely through takedowns and enforcement. There are few practical options in between. Without defined permissions or usage rules, even well-intentioned fan creations can lead to confusion, misrepresentation, or unintended risk.
This gap leaves both sides exposed. Fans lack clarity on what is acceptable, and creators are forced into reactive enforcement rather than structured participation. With the right guardrails in place, fan creativity and creator control do not have to be at odds. What is missing is a safe, consistent way to allow creative expression while preserving agency and accountability.
5. Public figures lose monetization opportunities
AI-generated content often drives engagement, visibility, and commercial value, yet it frequently excludes the public figures it depicts. When others use a likeness without permission, creators lose potential revenue from licensing, royalties, or approved partnerships. Rather than taking part in how their image or voice is commercialized, they are left out of the process.
Morgan Freeman has pointed directly to this imbalance. Commenting on unauthorized AI uses of his likeness, he said, “I’m like any other actor: don’t mimic me with falseness. I don’t appreciate it, and I get paid for doing stuff like that. If you’re doing it without me, you’re robbing me.” His remarks highlight a core issue: AI-generated content can substitute for paid work or licensed appearances without any mechanism for compensation.
Without structured ways to opt into AI use and share in the resulting value, monetization remains limited to traditional channels. Royalties, usage-based payments, and ongoing revenue participation are largely absent from AI-driven media. As demand for synthetic content grows, the lack of economic frameworks continues to push value away from the individuals whose likeness makes that content desirable in the first place.
6. Platforms do not provide enough protection or transparency
Many platforms offer reporting tools for impersonation or misuse of a person’s likeness, but the process often lacks clarity. Creators are rarely told how reports are reviewed, how long decisions take, or how to escalate urgent cases. Once a complaint is submitted, communication is limited and outcomes can feel unpredictable.
Actor Jamie Lee Curtis highlighted this gap in a public Instagram post after discovering an AI-generated commercial that used her likeness without authorization. She wrote, “I have gone through every proper channel to ask you and your team to take down this totally AI fake commercial… I tried to DM you and slide on in, but you don’t follow me so I’ve had to take to the public instaverse to try to reach you.” Curtis explained that public escalation felt like the only remaining option after standard reporting paths failed to produce a response.
Her post captured a broader issue. When formal systems offer little visibility or timely resolution, accountability shifts toward public pressure rather than clear process. For most creators, that route is not realistic or accessible. There is no defined escalation path, no consistent feedback loop, and no transparency into how similar cases are handled.
7. There is no reliable way to verify what is real
As AI-generated media becomes more convincing, it is harder to tell whether a piece of content reflects something that actually happened. Images, videos, and audio can look credible without showing how they were created, edited, or approved. In many cases, audiences rely on context or assumption rather than clear signals.
Platforms do not offer consistent ways to signal authenticity. Labels and disclosures are applied unevenly and are often easy to miss or misunderstand. Some technical approaches, such as content provenance frameworks or watermarking, aim to address this problem by attaching metadata or markers at the point of creation. These tools can help indicate whether content was generated by AI or altered after the fact. However, adoption remains uneven, and many of these signals can be stripped, ignored, or lost as content moves across platforms.
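As a rough illustration of the underlying idea, the sketch below binds a small metadata record to a content hash at creation time and checks both later. It is a hypothetical toy, not any real provenance standard: actual frameworks carry far richer manifests and use public-key signatures, and as the example shows, a stripped record leaves the content unverifiable.

```python
# Toy sketch of the provenance idea: bind metadata to a content hash at
# creation time, then verify both later. Real provenance frameworks are far
# richer; this only illustrates why a stripped record defeats verification.
import hashlib
import hmac
import json

SIGNING_KEY = b"creator-or-tool-secret"  # stand-in for a real signing key


def attach_provenance(content: bytes, generator: str) -> dict:
    """Create a provenance record tied to this exact content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. the tool that produced the content
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict | None) -> str:
    """Return a human-readable verdict for the content and its (optional) record."""
    if record is None:
        return "no provenance record: cannot tell how this was made"
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record.get("signature", "")):
        return "provenance record was tampered with"
    if hashlib.sha256(content).hexdigest() != record["sha256"]:
        return "content no longer matches its provenance record"
    return f"verified: generated by {record['generator']}"


image = b"...synthetic image bytes..."
record = attach_provenance(image, generator="example-image-model")
print(verify_provenance(image, record))  # verified: generated by example-image-model
print(verify_provenance(image, None))    # record stripped in transit: unverifiable
```

Even in this toy form, the limitation described above is visible: the check only helps if the record travels with the content as it is re-shared and if viewers are actually shown the result.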
Without widely recognized standards for provenance and verification, real and synthetic content continue to blend together. This creates uncertainty not only for audiences, but also for creators whose real statements or appearances can be mistaken for fabricated ones. Until verification signals are durable, portable, and consistently surfaced, determining what is real will remain a challenge across the online ecosystem.
Conclusion
Taken together, these challenges point to a shared reality. Systems built for earlier forms of online content are being stretched beyond their limits. Manual processes, limited visibility, unclear permissions, and the absence of verification all place the burden on individuals rather than on the infrastructure surrounding digital likeness.
What creators and public figures are asking for is not restriction, but clarity. They want to participate in creative spaces without losing control, to support innovation without sacrificing trust, and to engage audiences without constant uncertainty. Addressing these gaps will require tools and standards that recognize likeness as something that can be respected, managed, and verified at scale.
As conversations continue into the year ahead, the focus is shifting toward building safeguards that work before harm occurs. Clear permissions, transparent processes, and visible signals of legitimacy will play a central role in shaping how digital likeness is handled in a more accountable and sustainable way.