Thoughts on the No AI FRAUD Act

The acronym stands for No Artificial Intelligence Fake Replicas And Unauthorized Duplications. Introduced as a discussion draft by Rep. Maria Salazar et al., the No AI FRAUD Act would create a novel form of intellectual property in direct response to the use of AI to “clone” a likeness. With parallels to right of publicity (ROP) law, combined with a copyright-like, transferable ownership of rights, the No FRAUD bill as currently proposed is sweeping, citing a range of conduct from the use of deepfakes to create and distribute nonconsensual intimate material to the cloning of an actor’s or singer’s voice for commercial exploitation.

In short, the law would prohibit replication of anyone’s likeness without permission, and the purpose of the unlicensed replication would then determine the nature of the harm and the available remedies. Although the bill is well-founded in its intent to address certain harms to individuals like performing artists, its current scope, which combines permission and intent while seeking to remedy a broad range of potential harms, raises some difficulties.

Permission vs. Intent

As discussed on this blog, Cyber Civil Rights Initiative (CCRI) leaders Danielle Citron and Mary Anne Franks have advocated a permission-based, rather than intent-based, cause of action for the nonconsensual distribution of intimate material, commonly referred to as “revenge porn.”[1] The CCRI has worked hard to demonstrate that merely distributing this material without permission should be criminal, regardless of any intent to cause harm, and this makes sense given the nature of the conduct. But advancement in AI replication presents a unique challenge to the principle that permission is universally the signal event triggering liability.

No question that the guy who shares intimate material of an ex, a girl at school, a work colleague, etc. should be held accountable solely on the basis that he lacked permission, and this is valid whether the visual material is real (i.e., photographic) or synthetic (i.e., produced with AI). First Amendment defenses for this type of conduct have reasonably failed when various parties challenged the constitutionality of several of the “revenge porn” laws now in force in 48 states. The permission principle in harassment-based complaints should not be disturbed by the No FRAUD Act, and Congress would be wise to resist any temptation to combine the aims of this bill with current or developing federal prohibitions on “revenge porn.”

But the use of AI to replicate a likeness cannot be so broadly proscribed for all purposes. As the Motion Picture Association notes in its response to the bill, “… any legislation must protect the ability of the MPA’s members and other creators to use digital replicas in contexts that are fully protected by the First Amendment.” Notwithstanding contractual conflicts that may arise in the future between performers and producers, the MPA is right to note that AI cloning for expressive purposes that constitutes protected speech should not be swept into the scope of legislation like the No FRAUD Act.

The example I often use with friends and colleagues is the movie or TV series that casts a public figure (let’s call him Donald Trump) in a light he might not appreciate. Expressive portrayals—factual, dramatic, or sardonic—of public figures are paradigmatic forms of protected speech, and this principle should not be altered by vesting new IP rights in persons, premised solely on the use of AI models to achieve the same expressive results historically created with old-school “movie magic.” In other words, Trump should no more be empowered to enjoin the use of his AI likeness to comment upon his role in society than he would have been allowed to stop Saturday Night Live from producing the sketches featuring Alec Baldwin.

Vesting new “likeness IP” rights in all persons is a reasonable response to the potential harms—both financial and reputational—that may be caused to millions of creative professionals and ordinary citizens. But these goals must allow for expressive uses of AI replication, adhering to the longstanding contours that protect the speech right and to its limits, like libel and defamation.

In another example, imagine a documentary about the events of January 6th that includes reenactments based on witness testimony describing the actions of the former president during the attack on the Capitol. The documentary producer’s legal responsibility to balance faithful reportage with reasonable expressive license should not be altered solely on the basis that the film may use AI-generated likenesses of Trump, Meadows, Hutchinson, Ivanka, et al. rather than actors to produce the same scene.

With a documentary film, one can imagine a legal requirement to inform the viewer that what they are seeing is an AI-generated reenactment (rather than, say, someone’s cellphone recording), but no such requirement should apply to a non-documentary audiovisual work. In either case, misinformation is already thriving in a dangerously blurry space between fact and fiction, fed by a decline in media literacy and by the ability of any individual to distribute any fragment of material, without context, on a public platform. In other words, the documentarian can do her job right, but she cannot stop every potential bad actor from taking a segment of that reenactment and publishing it in a manner that changes its context and feeds a false narrative. (Thank you to all those who celebrated “remix culture” as a rejection of copyright law.)

AI-Generated Likeness and the Misinformation Problem

Regarding the documentary example, the preamble of the No FRAUD working draft cites the use of unauthorized likenesses for the purpose of disinforming the public about matters of a factual or newsworthy nature. And while this is indeed a problem that AI tools will exacerbate, it is a challenge that should be addressed separately from the intent and sweep of the No FRAUD proposal. Congress must recognize that the capacity to cause widespread, societal harm through disinformation by means of AI likeness replication is too hazardous and too rampant to remedy on a case-by-case, civil-liability basis. And that is true even if the producer of the fake is operating within the reach of U.S. law rather than from, say, China or Russia.

Further, there is a legal tension created by comparing the entertainment satirist with the news provocateur who trades in misinformation, as we saw in the 2020 slander claims against Tucker Carlson of FOX News. Arguing that “no reasonable person” would truly believe everything Carlson says, Fox’s attorneys successfully defended the network against the claims, and while this may be a reasonable finding based on the facts presented, it is one of many examples in which the lines separating opinion, criticism, satire, and information have been blurred beyond relevance vis-à-vis public perception. Now add the ability to cheaply recreate anyone’s likeness with sophisticated AI, and how far can a “news” organization push the line under the same protections that apply to the satirical filmmaker or The Daily Show?

Of course, my references here to Trump and Carlson allude to a much bigger, underlying problem—namely, that Congress is not going to effectively address the use of AI likeness for misinformation unless Members on both sides can agree to mutually define fact and fiction. This is not to say that Dems never cling to narratives built on some rather shaky foundations, only that it’s hard to compete with the existential lies of whatever the hell the GOP has become in the thrall of Trumpism. That, and no American political figure has ever proven so thin-skinned in response to criticism.

For the moment, my own view is that a bill like No FRAUD should be narrowly tailored to vest new “likeness IP” in persons to proscribe compelled speech and commercial exploitation that meets standards akin to unfair competition. Further, because such uses require a court to weigh the intent of likeness replication, this new right should not preempt or alter anti-“revenge porn” legislation, where lack of permission must remain the sole cause of action. While I see the potential of this bill to protect various artists and non-artists with novel rights against novel harms, difficulties like those addressed in this post must help define the contours of those new rights.


[1] “Revenge porn” is a problematic term because it implies intent to harm, which is anathema to the principle that lack of consent is the cause of action.


Is Site Blocking Finally Within Sight?

With all the talk about AI, one might think the problem of old-school media piracy has abated, but this week, the House Judiciary Committee held a hearing entitled Digital Copyright Piracy: Protecting American Consumers, Workers, and Creators. Although much of the conversation was familiar territory (e.g., the economic value of the creative industries and the cost of piracy), the legislative question in the room was whether the United States will finally adopt site blocking provisions as many other nations have done. In her testimony, Motion Picture Association (MPA) general counsel Karyn Temple stated:

…over the past decade, more than 40 countries, including leading democracies such as the U.K., much of Western Europe, Canada, Australia, India, Brazil, South Korea, and Israel, have enacted no-fault injunctive relief regimes that expressly authorize courts or administrative agencies to issue orders directing internet service providers (“ISPs”) and other online intermediaries to disable access to websites dedicated to piracy. Pursuant to these laws, courts and administrative agencies have disabled access to more than 90,000 domains used by over 27,000 websites engaged in blatant piracy after affording full due process.

“No-fault injunctive relief” and “full due process” are the key phrases to keep in mind as Congress re-opens this discussion and the self-appointed defenders of the internet respond like Sauron’s orcs to the battle cry. After all, things got a bit heated “twelve years ago,” as Rep. Zoe Lofgren noted in reference to the SOPA/PIPA legislation, which was doomed by an extraordinary disinformation and fear-mongering campaign coordinated and funded by the internet industry. And although that story ought to be old news, the testimony of Matt Schruers, president of the Computer and Communications Industry Association (CCIA), rang the “Stop-SOPA” bell with statements like the following:

Content filtering by automation is not always effective or accurate. In particular, “off-the-shelf” filtering technologies tend to be focused only on specific classes of works, and cannot necessarily provide meaningful protection to content on sites whose users can create many different types of works. Automated tools are also unable to take into account context or nuance of individual uses, so may result in over-removal of non-infringing, fair uses. These false positives merit particular attention because any unjustified content filtering or takedown may suppress lawful expression.

That commentary is dog-whistling because it has nothing to do with the purpose of, or mechanisms inherent to, site blocking. Schruers is referring to imperfections in the DMCA notice-and-takedown process, exaggerating its effects on protected speech, and eliding the fact that a distinguishing aspect of a site blocking provision is that it requires a party to present evidence to obtain a court order and provides ample opportunity for both service providers and the allegedly infringing website to rebut that evidence. No party would be empowered to “automate” site blocking the way that, for instance, copyright owners can automate DMCA takedown notices.

Homing in on Schruers’s rhetoric, Rep. Ted Lieu arguably provided the highlight of the hearing when he used his phone to access the pirate site F Movies, which, as he confirmed with Ms. Temple, cannot be accessed in most of Europe. Emphasizing the fact that the F Movies site has been available to Americans since 2016, Lieu stated, “We’re trying to be reasonable here. This is such an unreasonable case. This is so clearly online piracy, copyright infringement, and you don’t want your organization, your members, defending something so blatantly unlawful and unreasonable. I just ask your members to block that site today.”

In response, Schruers first noted that the broadband providers were not testifying, but Lieu pressed on: “You cannot defend this. This is not defensible.” Schruers stated that his members are also content creators, that piracy is a concern they share with other content creators, and then reiterated the argument that the best remedy for piracy is wider legal availability of more content.

This rhetoric, dating back to Napster (1999), has not aged well at a time when, if anything, consumers often feel that there are too many channels requiring too many subscriptions. But that is a business narrative still evolving in the streaming market, and not one that justifies access to pirate sites. More to the point, the “more access” argument completely ignores the myriad reasons to finally adopt site blocking, even if the harm to content creators were minimal.

For instance, Rep. Lofgren resurfaced the prospect of prohibiting payment processors (i.e., credit card companies) from doing business with pirate sites, but as film producer Richard Gladstein noted, a pirate site’s revenue is not derived solely, if at all, from traditional credit card transactions. Although Mr. Gladstein did not go into much detail, he did mention the use of cryptocurrency in illegal trade of this nature, and Rep. Lofgren failed to note that voluntary initiatives between copyright owners and payment processors to prevent known infringing sites from accessing payment networks have existed for years and do only so much to stifle piracy.

Moreover, as reported in several posts on this blog, the Digital Citizens Alliance has provided extensive reports on the complex, malware-based, dark-web market for which pirated media is merely used as bait. Thus, even if not a single professional in media production were financially harmed by piracy, the use of media piracy as a conduit to more dangerous forms of cybercrime is reason alone for Congress to finally block these sites from the U.S. market.

Of course, piracy is a threat not only to creators, but to everyone involved in bringing entertainment, including live broadcasts of sporting events, to fans. As Riché McKnight, general counsel for the Ultimate Fighting Championship, states in his written testimony, “UFC estimates that within hours of a single UFC event, hundreds of thousands of viewers may have already seen infringing versions of the event…UFC further estimates that due to piracy, multiple millions of dollars are diverted from legitimate purchases of UFC content each year.”

McKnight’s testimony also highlights a major problem with the DMCA — that while it calls for service providers to take down infringing content “expeditiously,” there is no clear definition of that term. This is extremely problematic for industries broadcasting live sporting events, where the value of the broadcast may last minutes or seconds and then diminish greatly once the event concludes.

What About Felony Streaming?

In 2020, over the objections of the usual anti-copyright parties, Congress passed the Protecting Lawful Streaming Act (PLSA), which made enterprise-scale piracy by means of streaming a felony rather than a misdemeanor. The question of how effectively the Justice Department has used this provision was raised in the hearing, perhaps as a distraction from site blocking, but there are at least two reasons why the PLSA is not a complete remedy for piracy. One is, of course, the limited resources of the DOJ; the other is that site blocking provisions exist precisely to prevent access to the domestic market by sites operating outside U.S. jurisdiction.

As Chairman Darrell Issa noted at the end of the hearing, U.S. Customs and the International Trade Commission are empowered to stop the importation of physical goods that violate intellectual property law. As such, he asked, “Today, aren’t we just talking about finding the equivalent of what for two-hundred plus years, our Customs and other agencies have done when there is due process and entities such as Article III courts have reached a decision, the execution of that protection is done by our government, or on behalf of our government, by orders to those who participate in bringing things into the United States?”

Perhaps not the most concisely worded question, but it is exactly right. The U.S. bars illegal goods from overseas from entering the country, and there is no threat to constitutional principles in doing likewise when the means of “importation” is digital transmission. Moreover, as stated here many times, an infringing digital transmission of a work can cause immensely more damage than even thousands of physical bootlegs. Assuming the HJC proceeds toward site blocking legislation, I imagine we’ll hear some SOPA-like noise begin to rumble online. But based on my read of that hearing and the market overall, I wouldn’t expect that noise to make much difference this time.

AI “Training” Still an Open Copyright Question

On October 30, Judge Orrick of the Northern District of California largely granted the AI companies’ motions to dismiss the class-action complaint filed by Sarah Andersen, Karla Ortiz, and Kelly McKernan on behalf of all visual artists whose works have been used without permission for the purpose of “training” generative AI models. Several claims were dismissed with leave to amend, but without detailing every allegation, dismissal, and possible cure, a few points are noteworthy for creators watching these developments with understandable anxiety.

First, the dismissals are not surprising because several of the claims were not well founded in law. For instance, as discussed in other posts, the claim that all outputs of the AI systems are unlicensed “derivative works” of the works ingested is a football bat[1] of an argument. “I am not convinced that copyright claims based [on] a derivative theory can survive absent ‘substantial similarity’ type allegations,” states Judge Orrick. One would be hard-pressed to find a copyright advocate who disagrees with that statement because a “derivative work” must share some protectable elements derived from the originally protected work.

Also of note, both as a matter of civil procedure and of enforcing one’s rights in general, the copyright claims of plaintiffs McKernan and Ortiz were dismissed with prejudice[2] for the simple reason that neither artist named works in suit that were registered with the U.S. Copyright Office. Although a class-action copyright suit can be filed on behalf of “all artists,” including those whose works are not registered, the named plaintiff(s) must allege infringement of registered works identified in the complaint. Registration is a prerequisite to filing an infringement suit in federal court, and timely registration (generally made before the infringement occurs) is required to seek statutory damages and attorneys’ fees.

On a more positive note, the court did not dismiss Andersen’s allegation of direct copyright infringement by Stability AI. Here, Judge Orrick finds that the complaint plausibly alleges that illegal copying occurs as part of Stability’s “training” process and, therefore, presents triable issues of fact that cannot be resolved at this stage. As indicated in older posts about these cases, this question—namely, infringement of the “reproduction” right under §106(1)—will likely be the most illuminating for both creators and AI developers as to where the legal boundaries lie when it comes to “training” with protected works.
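For readers unfamiliar with how these systems are assembled, the following is a minimal, hypothetical sketch of a generic dataset-ingestion step; the URLs, paths, and names are placeholders, not anything drawn from Stability’s actual code. It illustrates why the reproduction question is the one to watch: before any model weights are updated, complete copies of the ingested works are ordinarily fetched and fixed in storage.

```python
# Minimal, hypothetical sketch of a generic image-dataset ingestion step.
# Illustrative only: NOT Stability AI's actual pipeline. The URLs and
# paths below are placeholders. The point demonstrated is that complete
# copies of each work are fixed in storage before any "training" begins.

import os
import urllib.request

DATASET_DIR = "training_data"  # hypothetical local cache of copied works

# Placeholder URLs standing in for images scraped from the open web.
IMAGE_URLS = [
    "https://example.com/artwork1.png",
    "https://example.com/artwork2.png",
]

def ingest(urls, dest_dir):
    """Download each image, writing a full reproduction of the work to disk."""
    os.makedirs(dest_dir, exist_ok=True)
    for i, url in enumerate(urls):
        # urlretrieve writes a complete copy of the file; that copy
        # persists for the duration of training (and often beyond).
        urllib.request.urlretrieve(url, os.path.join(dest_dir, f"img_{i:06d}.png"))

if __name__ == "__main__":
    ingest(IMAGE_URLS, DATASET_DIR)
```

Whether intermediate copies of this kind are ultimately excused (e.g., as fair use) is precisely the kind of question the surviving claim may help answer.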

On a related note, I was reviewing the comments submitted by the Computer & Communications Industry Association (CCIA) to the Copyright Office Notice of Inquiry (NOI) on artificial intelligence. Although I do not disagree with all of its conclusions (e.g., on the copyrightability of AI-generated works), CCIA is so dead certain that “training” with protected works is fair use that it states, “No one should have the ‘right’ to object to an AI model being trained on their work.” Of course, this overstatement was the first sentence in an answer to an odd question from the Office, which asks the following:

9.5. In cases where the human creator does not own the copyright—for example, because they have assigned it or because the work was made for hire—should they have a right to object to an AI model being trained on their work? If so, how would such a system work?

I don’t understand the intent of this question. A work in copyright is protected until its term of protection expires. The rights attached to that work may be infringed at any point during the term of protection, and it makes no difference whether the rights are owned by an entity under the work made for hire doctrine or whether the rights have been transferred by agreement, inheritance, sale, etc. The question of whether AI “training” constitutes infringement is in no way affected by the status or nature of the copyright owner of the work(s) used.

Unfortunately, this question provided the CCIA with a basis to respond thus:

If a right to object to the use of a work for hire existed, it would belong to the employer. However, given the volume of copyrighted works owned by large employers, allowing employers to take this type of action would exclude large swaths of data that would aid in technological progress and the quality of AI systems and create significant barriers to entry for small entities wishing to develop new AI technologies.

The “right to object” to the use of works in AI “training” may be decided in instances like the surviving claim in Andersen. Meanwhile, CCIA’s broader argument appears to be that the potential cost of doing business should inform the threshold question of copyright infringement. No doubt, AI developers would like unlimited access to free materials, but this “don’t stop the innovation” argument is not a legal question; it is a hackneyed retread of the utopian claim that copyright enforcement online will stifle the “free flow of information.”

Well, whatever is freely flowin’ out there, I wouldn’t necessarily call it information, and against that background, I see no reason to give AI developers carte blanche to exploit creators (again) for the sake of innovation that may not be progress.


[1] Football bat is a military expression for an improvised, cobbled-together tool.

[2] i.e., the dismissed claims cannot be amended and refiled.