The NO FAKES Act Returns to Congress With Industry Support: What It Could Mean for Digital Protection and AI-Generated Content
April 9, 2025
Johana Gutierrez

Today, the NO FAKES Act was reintroduced in both chambers of Congress for the 2025 legislative session—this time with a powerful coalition of bipartisan sponsors and support from major industry players including YouTube and OpenAI.

The legislation aims to establish clear legal protections against the unauthorized use of an individual’s likeness and voice through AI-generated media. As synthetic content becomes more realistic and widely distributed, the return of the NO FAKES Act reflects growing urgency in addressing the legal and ethical challenges surrounding digital identity and generative technology.

A Win for Identity Rights

The NO FAKES Act, short for Nurture Originals, Foster Art, and Keep Entertainment Safe, proposes a new federal right for individuals to control the commercial use of their voice and likeness. This right could extend for up to 70 years after a person’s death, giving estates long-term authority over the individual’s digital identity.

If passed, it would prohibit the creation or monetization of AI-generated replicas of someone’s identity without their consent. In practice, this would hold AI-generated likenesses and voice clones used in advertising, entertainment, or influencer content to a new legal standard grounded in permission and transparency.

Accountability for Misuse

The bill introduces legal liability for individuals or entities that create non-consensual synthetic media intended to deceive, impersonate, manipulate, or exploit others for personal, political, or commercial gain. It is designed to address misuse before this content can inflict reputational, psychological, or financial harm.

“The introduction of the NO FAKES Act marks a turning point in AI accountability,” said Luke Arrigoni, CEO and Founder of Loti AI. “Loti AI has been at the forefront of enforcement challenges, and this bill would give us the takedown authority we’ve needed to combat harmful AI-generated content. Protecting individuals from unauthorized replicas isn’t just a legal issue—it’s about restoring trust and dignity in the digital age.”

The proposed legislation also includes a DMCA-style safe harbor: platforms that act promptly to remove infringing content after notification would not be held liable, encouraging timely response without overburdening content hosts.

Balancing Innovation and Rights

The bill also makes clear what it does not restrict. It includes protections for First Amendment use cases such as parody, satire, news reporting, and commentary—ensuring that the right to speak freely is preserved while reinforcing the right not to be misrepresented or exploited.

By prioritizing consent over censorship, the NO FAKES Act outlines an ethical and enforceable model for governing AI-generated likeness and biometric data.

Why This Time May Be Different

Originally introduced in 2024, the bill gained early bipartisan attention. Its 2025 return comes with renewed momentum: public pressure has grown, AI-generated impersonation cases have escalated, and leading technology companies have now publicly backed the measure.

This combination of political will and industry alignment could make the difference in pushing the bill through to enactment.

What This Means Going Forward

Protecting digital identity in the age of generative AI requires sustained collaboration among platforms, policymakers, and the organizations directly engaged in detection, enforcement, and response. While the NO FAKES Act has only just been introduced, it has the potential to fundamentally reshape how the United States addresses identity misuse in digital environments.

If enacted, this legislation would provide individuals—and the organizations that support them—with the legal authority needed to act against unauthorized synthetic content. It would fill a long-standing gap in digital protections and create a framework for proactive enforcement.

At Loti AI, we’ve spent years helping individuals navigate the harms of deepfakes, impersonation, and non-consensual content. This bill represents a meaningful opportunity to strengthen that work and to offer the people we support greater agency over how their identity is used online.

We strongly support the NO FAKES Act and are encouraged to see growing consensus across government and industry. This moment is about more than AI regulation. It’s about reestablishing consent, trust, and dignity in the digital age.
