It Looks Like the EFF is Pro FAKES


When it comes to cyber policy and anything touching intellectual property, the Electronic Frontier Foundation’s critiques are so predictable, they might as well use ChatGPT to write their blog. For instance, in an April post opposing the NO FAKES Act, Corynne McSherry selects items from the same menu of responses EFF has used to oppose any form of online copyright enforcement. In this instance, she orders up the following: pretend to want a “better” bill; cite scary hypotheticals; pretend to care about creators; and, of course, insist that the speech right is in jeopardy.

For review, the NO FAKES Act would establish a new property right in every individual’s likeness, including one’s voice. As opined on this blog, its mechanisms comprise a thoughtful response to a novel challenge—namely the ability of just about any party to use generative artificial intelligence (GAI) to replicate the likeness of any person. The hazards of replication are obvious to the common-sense observer—from intensifying disinformation to commercial uses without permission to sexual predation, scams, and harassment. But as usual, the EFF advocates the interests of the tech industry by framing its critiques in a rhetoric that sounds pro-individual or even (ha!) pro-artist.

McSherry’s broadside at NO FAKES employs the tactic of alluding to hypothetical negative consequences, which Congress has (of course) failed to consider. Thus, EFF insists, as it did with bills like the CASE Act, that NO FAKES, as written, should be balled up, and that Congress should start over from scratch. But those of us familiar with the organization recognize that this pretense is there to mask the EFF’s view that the whole idea of a likeness right should be scuttled. If past is prologue, the EFF will never endorse any version of a law to remedy unlicensed AI likeness replication and, possibly, never engage as a good-faith negotiator on the subject.

Predictably, McSherry’s post elides important details about NO FAKES. I won’t unpack them all, but in one example, she writes, “The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies.” Of course, she doesn’t mention that although the 70-year term is the maximum, the likeness right would have to be renewed post-mortem and, similar to trademarks, renewal is conditioned on showing that the likeness is still in “authorized public use.”

But it was another paragraph that struck me as vintage EFF—an implication that existing right of publicity (ROP) laws in the states are already harmful to speech and that NO FAKES can only make matters worse. McSherry writes:

… it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died.

But following the links in that first paragraph, one finds a couple of unmeritorious ROP claims along with a couple of EFF’s opinions about how ROP law should be applied relative to the speech right. In the first instance, weak cases that do not prevail serve only to disprove the allegation that a law has “reached beyond” its intent. And in the second instance, while the EFF is entitled to its opinion, its interpretation of the speech right is so expansive that it is unremarkable how often the courts disagree with its positions.

In fact, the EFF’s overbroad concept of the speech right is one reason I say it is being disingenuous in asking for a revised likeness bill. NO FAKES arguably provides better guidance on the use of AI replicas for protected speech than ROP case law, but although McSherry acknowledges these provisions, she states, “…interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.” That’s code for “let’s not have anything like this law” because of course all laws need to be interpreted and, yeah, lawyers are usually involved.

Another hypothetical includes the familiar, and laughable, implication that the EFF cares about creators and performers…

People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image.

While it is true that an individual could over-license the use of her likeness to another party, this is no different than licensing traditional forms of intellectual property. That an owner might give away too much is a consideration of the owner’s savvy and legal representation, but not a rationale to oppose the right being established in the first place. This complaint is a rehash of the fallacy that copyright rights are bad because some parties have cajoled artists into signing over more than they should. As applied to NO FAKES, I suspect people will favor the right to control their own likenesses and then worry about licensing, if that becomes an issue.

Finally, McSherry’s post repeats the same old prediction that NO FAKES will lead to platforms removing some undefinable, yet unacceptable, volume of protected speech. It’s almost surprising that EFF remains committed to this message when social media is clearly overflowing with so much protected hogwash—and when the major platforms are increasingly taking down innocuous posts without any rationale or transparency. A casual review of the current state of “information” on social platforms can only support the rational prediction that AI-generated likenesses will exacerbate the problem. At the same time, EFF’s claim to defend “new creativity” is overstated when even protected uses of AI likenesses are often little more than brief diversions of limited cultural, and no informational, value.

For every potentially legal use of AI likeness, there are dozens of ways for scammers, foreign adversaries, predators, and unscrupulous business operators to use the technology to cause serious harm. But, true to form, the EFF asks that we ignore evidence of the damage being done and imagine instead that any remedy must be worse than the disease. Just off the cuff, they’ve used similar tactics to be wrong about the CASE Act, Section 1201, Section 230, site-blocking, and Controlled Digital Lending. So, it is hardly a bold speculation to say that they’re wrong about NO FAKES.


Image source by: maxxyustas

No FAKES Act Introduced: A Big Deal for Performing Artists and Everyone Else


Ever since the generative artificial intelligence (GAI) controversy began heating up, I’ve had several conversations with friends and colleagues who are voice actors and have had to disappoint them by repeating the fact that copyright law does not protect a person’s “likeness,” which includes one’s voice. And I’ve had similar conversations with colleagues focused on replication of likeness for the production of nonconsensual pornography. Nevertheless, the instinct makes sense—that the same human-centric principles that protect “authorship” might apply to the human’s likeness as well. Now, that basic sense of justice is articulated in a new bill introduced in the Senate.

Historically, the protection of likeness has been the subject of a relatively narrow area of law called the right of publicity (ROP), a common-law right with statutory provisions in 25 states—and narrow because ROP typically applies to the unauthorized use of celebrity likeness for commercial advertising purposes. But with the introduction of the No FAKES Act, Congress proposes to substantially change the protection of individual likeness in direct response to the capacity of GAI to conjure just about anything from fake news to fake performances by actors and musicians.

Introduced by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC), the acronym stands for Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The heart of the bill establishes a property right in the likeness of any person, living or dead, and prohibits digital replication without permission. Similar to copyright rights, the “digital replication right” is vested in every individual regardless of whether one commercially exploits one’s own likeness, and the right is licensable and transferable to heirs and assigns after death. Post-mortem rights would last 10 years but may be extended through a renewal and registration process administered by the U.S. Copyright Office if the right holder can show active and authorized public use of the voice or visual likeness.

The bill anticipates legitimate creative and newsworthy uses of unlicensed replication and exempts a broad range of uses for purposes like news, documentary, parody, etc. For a purpose to be “newsworthy,” the replicated individual must be the subject of the material created—e.g., a story about Hugh Jackman, not merely a replication of him “cast” for free in your film or commercial. Further, the bill explicitly states that creating a false impression that a given replication is an “authentic” recording of the individual will still trigger liability under the new law. Thus, the documentarian who uses a replication in a scene that looks like real surveillance or cellphone footage will probably need to identify that material as AI-generated to avoid liability.

Remedies for violation of the digital replication right include damage awards of $5,000 per depiction made by individuals or by online providers, and $25,000 per depiction made by corporate entities other than online providers. Plaintiffs may also seek actual damages and attorney fees, and courts may award punitive damages where unlawful replications entail malice, fraud, or willful ignorance that the use violated the law.

Finally, taking a page from the Copyright Act, No FAKES contains a DMCA-like takedown provision for removal of content alleged to be an unlawful replication, and this provision includes maintenance by the Copyright Office of a database of “agents” to whom such complaints must be submitted. Likewise, familiar safe harbor provisions apply to both product developers and platforms that may, without the knowledge of these providers, be used to produce or distribute unlicensed replications.

Given Silicon Valley’s poor record of DMCA compliance for copyright owners, the takedown provisions in No FAKES naturally raise questions about everyday removal of material, which is often the first, if not the main, remedy non-performers will care about. Regardless, from my perspective, the bill both recognizes a wide range of abuses of GAI replication and exempts or limits liability for an appropriate range of legitimate, First Amendment-protected uses of the technology.

More than a good start, No FAKES appears to draw from many lessons learned over the past 20+ years pitting human and creative rights against the predatory “progress” of Big Tech. I join the Human Artistry Campaign in endorsing this bill and encourage the full Senate to pass it as soon as possible.



Thoughts on the No AI FRAUD Act

The acronym stands for No Artificial Intelligence Fake Replicas and Unauthorized Duplication. Introduced as a discussion draft by Rep. Maria Salazar et al., the No AI FRAUD Act would create a novel form of intellectual property in direct response to the use of AI to “clone” a likeness. With parallels to right of publicity (ROP) law, combined with a copyright-like, transferable ownership of rights, the No FRAUD bill is sweeping as currently proposed, citing a range of conduct, from the use of deepfakes to create and distribute nonconsensual intimate material to the cloning of an actor’s or singer’s voice for commercial exploitation.

In short, the law would prohibit replication of anyone’s likeness without permission, and then, the purpose of the unlicensed replication would determine the nature of the harm and available remedies. Although the intent of this bill is well-founded in addressing certain harms to individuals like performing artists, the bill’s current scope, which combines permission and intent while seeking to remedy a broad range of potential harms, raises some difficulties.

Permission vs. Intent

As discussed on this blog, Cyber Civil Rights Initiative (CCRI) leaders Danielle Citron and Mary Anne Franks have advocated a permission-based, rather than intent-based, cause of action for the nonconsensual distribution of intimate material, commonly referred to as “revenge porn.”[1] The CCRI has worked hard to demonstrate that merely distributing this material without permission is criminal, regardless of the intent to cause harm, and this makes sense in response to the nature of the conduct. But advancement in AI replication presents a unique challenge to the principle that permission is universally the signal event triggering liability.

No question that the guy who shares intimate material of an ex, a girl at school, a work colleague, etc. should be held accountable solely on the basis that he lacked permission, and this is valid whether the visual material is real (i.e., photographic) or synthetic (i.e., produced with an AI). First Amendment defenses for this type of conduct have reasonably failed when various parties challenged the constitutionality of several of the “revenge porn” laws, now in force in 48 states. The permission principle in harassment-based complaints should not be disturbed by the No FRAUD Act, and Congress should likely avoid any temptation to combine the intent of this bill with current or developing federal prohibitions for “revenge porn.”

But the use of AI to replicate a likeness cannot so broadly be proscribed for all purposes. As the Motion Picture Association notes in its response to the bill, “… any legislation must protect the ability of the MPA’s members and other creators to use digital replicas in contexts that are fully protected by the First Amendment.” Notwithstanding contractual conflicts that may arise in the future among performers and producers, the MPA is right to note that AI cloning for expressive purposes that constitute protected speech should not be swept into the scope of legislation like the No FRAUD Act.

The example I often use with friends and colleagues is the movie or TV series that casts a public figure (let’s call him Donald Trump) in a light he might not appreciate. Expressive portrayals—factual, dramatic, or sardonic—of public figures are paradigmatic forms of protected speech, and this principle should not be altered by vesting new IP rights in persons, premised solely on the use of AI models to achieve the same expressive results historically created with old-school “movie magic.” In other words, Trump should no more be empowered to enjoin the use of his AI likeness to comment upon his role in society than he would have been allowed to stop Saturday Night Live from producing the sketches featuring Alec Baldwin.

Vesting new “likeness IP” rights in all persons is a reasonable response to the potential harms—both financial and reputational—that may be caused to millions of creative professionals and ordinary citizens. But these goals must allow for expressive uses of AI replication, adhering to the longstanding contours that protect the speech right and to its controlling limits, like libel and defamation.

In another example, imagine a documentary about the events of January 6th that includes reenactments based on witness testimony describing the actions of the former president during the attack on the Capitol. The documentary producer’s legal responsibility to balance faithful reportage with reasonable expressive license should not be altered solely on the basis that the film may use generated AI likenesses of Trump, Meadows, Hutchinson, Ivanka, et al. rather than actors to produce the same scene.

With a documentary film, one can imagine a legal requirement to inform the viewer that what they are seeing is an AI-generated reenactment (rather than, say, someone’s cellphone recording), but no such requirement should apply to a non-documentary audiovisual work. In either case, misinformation is already thriving in a dangerously blurry space between fact and fiction, abetted by a decline in media literacy and by the ability of any individual to distribute any fragment of material without context on a public platform. In other words, the documentarian can do her job right, but she cannot stop every potential bad actor from taking a segment of that reenactment and publishing it in a manner that changes its context and feeds a false narrative. (Thank you to all those who celebrated “remix culture” as a rejection of copyright law.)

AI Generated Likeness and the Misinformation Problem

Regarding the documentary example, the preamble of the No FRAUD working draft cites the use of unauthorized likenesses for the purpose of disinforming the public about matters of a factual or newsworthy nature. And while this is indeed a problem that AI tools will be used to exacerbate, it is a challenge that should be addressed separately from the intent and sweep of the No FRAUD proposal. Congress must recognize that the capacity to cause widespread, societal harm through disinformation by means of AI likeness replication is too hazardous and too rampant to remedy on a case-by-case, civil-liability basis. And that’s even if the producer of the fake is operating within the reach of U.S. law rather than, say, China or Russia.

Further, there is a legal tension created by comparing the entertainment satirist with the news provocateur who trades in misinformation, as we see in the claims of slander against Tucker Carlson of FOX News in 2020. Arguing that “no reasonable person” would truly believe everything Carlson says, Fox’s attorneys successfully defended the network against any cause of action, and while this may be a reasonable finding based on the facts presented, it is one of many examples in which the lines separating opinion, criticism, satire, and information have been blurred beyond relevance vis-à-vis public perception. Now add the ability to cheaply recreate anyone’s likeness with sophisticated AI, and how far can a “news” organization push the line under the same protections that apply to the satirical filmmaker or The Daily Show?

Of course, my references here to Trump and Carlson allude to a much bigger, underlying problem—namely that Congress is not going to effectively address the use of AI likeness for misinformation unless Members on both sides can agree to mutually define fact and fiction. Not to say that Dems never cling to narratives built on some rather shaky foundations, only that it’s hard to compete with the existential lies of whatever the hell the GOP has become in the thrall of Trumpism. That and no American political figure has ever proven to be so thin-skinned in response to criticism.

For the moment, my own view is that a bill like No FRAUD should be narrowly tailored to vest new “likeness IP” in persons to proscribe compelled speech and commercial exploitation that meets standards akin to unfair competition. Further, because such uses require a court to weigh the intent of likeness replication, this new right should not preempt or alter anti-“revenge porn” legislation, where lack of permission must remain the sole cause of action. While I see the potential of this bill to protect various artists and non-artists with novel rights against novel harms, difficulties like those addressed in this post must help define the contours of those new rights.


[1] “Revenge porn” is a problematic term because it implies intent to harm, which is anathema to the principle that lack of consent is the cause of action.

Image by: meyerandmeyer