Thoughts on the No AI FRAUD Act

The acronym stands for No Artificial Intelligence Fake Replicas And Unauthorized Duplications. Introduced as a discussion draft by Rep. Maria Salazar et al., the No AI FRAUD Act would create a novel form of intellectual property in direct response to the use of AI to “clone” a likeness. With parallels to right of publicity (ROP) law, combined with a copyright-like, transferable ownership of rights, the No FRAUD bill is sweeping as currently proposed, targeting a range of conduct, from the use of deepfakes to create and distribute nonconsensual intimate material to the cloning of an actor’s or singer’s voice for commercial exploitation.

In short, the law would prohibit replication of anyone’s likeness without permission, and the purpose of the unlicensed replication would then determine the nature of the harm and the available remedies. Although the intent of this bill is well-founded in addressing certain harms to individuals like performing artists, the bill’s current scope, which combines permission and intent while seeking to remedy a broad range of potential harms, raises some difficulties.

Permission vs. Intent

As discussed on this blog, Cyber Civil Rights Initiative (CCRI) leaders Danielle Citron and Mary Anne Franks have advocated a permission-based, rather than an intent-based, cause of action for the nonconsensual distribution of intimate material, commonly referred to as “revenge porn.”[1] The CCRI has worked hard to establish that merely distributing this material without permission should be criminal, regardless of any intent to cause harm, and this makes sense given the nature of the conduct. But advances in AI replication present a unique challenge to the principle that lack of permission is universally the signal event triggering liability.

There is no question that the guy who shares intimate material of an ex, a girl at school, a work colleague, etc. should be held accountable solely on the basis that he lacked permission, and this holds whether the visual material is real (i.e., photographic) or synthetic (i.e., produced with AI). First Amendment defenses for this type of conduct have reasonably failed when various parties challenged the constitutionality of several of the “revenge porn” laws now in force in 48 states. The permission principle in harassment-based complaints should not be disturbed by the No FRAUD Act, and Congress should avoid any temptation to fold the aims of this bill into current or developing federal prohibitions on “revenge porn.”

But the use of AI to replicate a likeness cannot be so broadly proscribed for all purposes. As the Motion Picture Association notes in its response to the bill, “… any legislation must protect the ability of the MPA’s members and other creators to use digital replicas in contexts that are fully protected by the First Amendment.” Notwithstanding contractual conflicts that may arise in the future between performers and producers, the MPA is right that AI cloning for expressive purposes constituting protected speech should not be swept into the scope of legislation like the No FRAUD Act.

The example I often use with friends and colleagues is the movie or TV series that casts a public figure (let’s call him Donald Trump) in a light he might not appreciate. Expressive portrayals—factual, dramatic, or sardonic—of public figures are paradigmatic forms of protected speech, and this principle should not be altered by vesting new IP rights in persons, premised solely on the use of AI models to achieve the same expressive results historically created with old-school “movie magic.” In other words, Trump should no more be empowered to enjoin the use of his AI likeness to comment upon his role in society than he would have been allowed to stop Saturday Night Live from producing the sketches featuring Alec Baldwin.

Vesting new “likeness IP” rights in all persons is a reasonable response to the potential financial and reputational harms that may be caused to millions of creative professionals and ordinary citizens. But these goals must allow for expressive uses of AI replication, adhering to the longstanding contours that protect the speech right, subject to established limits like libel and defamation.

In another example, imagine a documentary about the events of January 6th that includes reenactments based on witness testimony describing the actions of the former president during the attack on the Capitol. The documentary producer’s legal responsibility to balance faithful reportage with reasonable expressive license should not be altered solely because the film uses AI-generated likenesses of Trump, Meadows, Hutchinson, Ivanka, et al. rather than actors to produce the same scene.

With a documentary film, one can imagine a legal requirement to inform the viewer that what they are seeing is an AI-generated reenactment (rather than, say, someone’s cellphone recording), but no such requirement should apply to a non-documentary audiovisual work. In either case, misinformation already thrives in a dangerously blurry space between fact and fiction, abetted by a decline in media literacy and the ability of any individual to distribute any fragment of material, without context, on a public platform. In other words, the documentarian can do her job right, but she cannot stop every bad actor from taking a segment of that reenactment and publishing it in a manner that changes its context and feeds a false narrative. (Thank you to all those who celebrated “remix culture” as a rejection of copyright law.)

AI-Generated Likeness and the Misinformation Problem

Regarding the documentary example, the preamble of the No FRAUD working draft cites the use of unauthorized likenesses for the purpose of disinforming the public about matters of a factual or newsworthy nature. And while this is indeed a problem that AI tools will exacerbate, it is a challenge that should be addressed separately from the intent and sweep of the No FRAUD proposal. Congress must recognize that the capacity to cause widespread, societal harm through disinformation by means of AI likeness replication is too hazardous and too rampant to remedy on a case-by-case, civil-liability basis. And that’s true even if the producer of the fake is operating within the reach of U.S. law rather than in, say, China or Russia.

Further, there is a legal tension created by comparing the entertainment satirist with the news provocateur who trades in misinformation, as we saw in the slander claims against Tucker Carlson of Fox News in 2020. Arguing that “no reasonable person” would truly believe everything Carlson says, Fox’s attorneys successfully defended the network against the claim, and while this may be a reasonable finding based on the facts presented, it is one of many examples in which the lines separating opinion, criticism, satire, and information have been blurred beyond relevance in public perception. Now add the ability to cheaply recreate anyone’s likeness with sophisticated AI, and how far can a “news” organization push the line under the same protections that apply to the satirical filmmaker or The Daily Show?

Of course, my references here to Trump and Carlson allude to a much bigger, underlying problem: namely, that Congress is not going to effectively address the use of AI likenesses for misinformation unless Members on both sides can agree to mutually define fact and fiction. This is not to say that Dems never cling to narratives built on some rather shaky foundations, only that it’s hard to compete with the existential lies of whatever the hell the GOP has become in the thrall of Trumpism. That, and no American political figure has ever proven to be so thin-skinned in response to criticism.

For the moment, my own view is that a bill like No FRAUD should be narrowly tailored to vest new “likeness IP” in persons to proscribe compelled speech and commercial exploitation that meets standards akin to unfair competition. Further, because such uses require a court to weigh the intent behind likeness replication, this new right should not preempt or alter anti-“revenge porn” legislation, where lack of permission must remain the sole basis for the cause of action. While I see the potential of this bill to protect artists and non-artists alike with novel rights against novel harms, difficulties like those addressed in this post must help define the contours of those new rights.


[1] “Revenge porn” is a problematic term because it implies an intent to harm, which is anathema to the principle that lack of consent alone is the basis for the cause of action.


