No FAKES Act Matched in House Bill to Address Gen AI Replication


On Monday, beloved actor James Earl Jones passed away at age 93; but in 2022, he signed an agreement with Lucasfilm allowing the voice of Darth Vader to live on through Gen AI replication. Jones’s permission to replicate his voice is a bittersweet prelude to today’s news from Capitol Hill, where the House of Representatives introduced its own No FAKES Act to prohibit the unlicensed replication of any person’s likeness or voice. Sponsored by Reps. Salazar, Dean, Moran, Morelle, and Wittman, the House bill is identical to the Senate No FAKES Act introduced in late July and thus demonstrates a bicameral, as well as bipartisan, sense of urgency to address misuse of Gen AI for this purpose.

To recap, No FAKES establishes a new property right in the likeness of any person and prohibits unauthorized replication of a likeness, which includes voice. Historically, likeness has only been protected on a limited basis by a patchwork of state Right of Publicity (ROP) laws, typically prohibiting unauthorized use of a celebrity likeness for commercial/advertising purposes. But the unprecedented capability of Gen AI to be used by anyone to replicate the likeness of anyone—a capability that will exacerbate the reality-bending world of online “information”—has prompted Congress to move swiftly and, in my view, creatively.

It was July 2023 when the idea of a federal ROP law was discussed during a hearing held by the House Judiciary Committee Subcommittee on Intellectual Property. At the time, I imagined this was a prelude to years of haggling on Capitol Hill while Gen AI developers proceeded at internet speed to wreak havoc with tools to produce more advanced “deepfakes.” Instead, the introduction of No FAKES in the Senate just one year later—and now, the same bill in the House less than two months after that—reveals both seriousness and deftness in legislators’ zeal to confront the issue. Rather than approach the matter as one to be remedied by a federal ROP law, Congress, with input from various stakeholders, has responded to the novelty of the challenge with novel legislation, drawing upon principles found in ROP, trademark, and copyright law.

If passed, No FAKES would operate much like ROP, except that it automatically applies to every citizen, and unlawful replication is not limited to commercial/advertising purposes. At the same time, because many misuses of Gen AI replication have both reputational and commercial implications, No FAKES shares a kinship with trademark, which is a creature of the Commerce Clause. And finally, the new right is copyright-like as a property right which vests in the individual, may be licensed for various uses, and is descendible to heirs and assigns with certain limits and conditions unique to protecting likeness.

Opposition Is Familiar but the Battlefield Is Different

Many of the usual suspects representing Big Tech, including the newly formed (I can’t believe they called it this) Chamber of Progress, will likely raise constitutional challenges to No FAKES, leaning hard into the refrain that the new likeness right will chill protected speech. As to the merits of that argument, the text of the bill already includes well-crafted, First Amendment-based exceptions; and as a PR message, I believe Big Tech is refreshingly at a disadvantage. Concerns over abuse of Gen AI encompass a broad range of Americans—from professional creators to parents seeing how easily children can be sexually exploited—and in general, people just aren’t buying Big Tech’s “make life better” rhetoric anymore.

Examples of legitimate innovation (e.g., Jones permitting Darth Vader to continue, or Randy Travis overcoming physical voice loss) will entail permission of the person whose likeness or voice is being replicated. Yet, in response to concerns about the many harms that unlicensed Gen AI replication may cause, AI defenders will promote the overbroad refrain that “innovation” must be allowed to flourish — but of course, “innovation” is Big Tech’s euphemism for “profitability at any cost.” Congress is still playing catch-up to address myriad harms fostered by pre-AI social media and is, therefore, reluctant to repeat the mistakes of the late 1990s by allowing Gen AI “room to grow” without restrictions.

Interestingly, Chamber of Progress appears designed to frame the multi-billion-dollar AI gamble as socially and politically “progressive,” a strategy belied by its advocating broad liability shields for AI developers akin to Section 230 of the CDA and Section 512 of the DMCA. In fact, that view aligns perfectly with OpenAI CEO Sam Altman suggesting that it is impossible to develop AI models without free use of copyrighted works, or with investor Marc Andreessen writing a smug and erroneous manifesto as a plea for continued laissez-faire policy in all things tech. If there is anything “progressive” about Gen AI, Chamber of Progress will need to produce more than worn-out rhetoric to prove it.

We’ve been here and done this, but No FAKES is a bill with a lot of political momentum. The likelihood that many citizens will oppose a prohibition on the unlicensed use of their own, or their children’s, likenesses seems low to the point of futility. We’ll see what comes, but by my lights, No FAKES is destined to become law.


Image by: nikolay100

TikTok-Inspired Child Suicide Prompts a Sound Reading of Section 230


Last week, the Third Circuit Court of Appeals issued an opinion regarding Section 230 of the Communications Decency Act. It may be the strongest affirmation to date that the statute does not provide a blanket liability shield for all social platforms regardless of their conduct. Specifically, §230(c)(1) only immunizes platforms for liability that may arise from other parties’ speech, not from the platform’s own speech. And although the platforms have sought to argue that their “recommendation” algorithms, which push content to users, do not constitute speech, the courts aren’t buying it.

In the case Anderson v. TikTok, the appeals court reversed the lower court’s finding that the platform was automatically immunized against a liability claim involving the death of a child who attempted one of the many dangerous “challenges” that appear on social media. In this case, Nylah Anderson, age 10, died by accidentally hanging herself when she tried the “Blackout Challenge,” which dared people to asphyxiate themselves until they passed out. At issue for TikTok is not the challenge itself, started by an unknown third party, but the “For You Page” algorithm which “recommended” the challenge to Anderson. Judge Matey, in a strident concurrence with the circuit court opinion, writes the following:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Though the reference to St. Augustine implies a religious moralizing I might omit, Judge Matey’s accusation that social platforms host a “cauldron” of dangerous, illegal, and depraved material behind a veil of social good and constitutional rhetoric is indisputable. As a legal matter, had Anderson discovered the video challenge on her own (e.g., via search), TikTok would likely be immunized by §230; but because a “recommendation” algorithm pushed the challenge to the child, with fatal results, the distinction is an important one. It could signal a shift in judicial review of the statute and, we should hope, an overdue change in platform governance.

As Judge Matey further states in his concurrence, TikTok’s presumed immunity under §230 in this case is “…a view that has found support in a surprising number of judicial opinions dating from the early days of dial-up to the modern era of algorithms, advertising, and apps.” That view is properly dimming now, and by my reckoning, the Supreme Court will go where the Third Circuit went last week. In a pair of nearly identical cases, Gonzalez v. Google and Twitter v. Taamneh (2023), the plaintiffs, on behalf of victims of two ISIS-related terror attacks, sought to hold the platforms accountable for “recommending” ISIS recruiting videos. But because those claims relied substantially on meeting the standard for “aiding and abetting” under the Anti-Terrorism Act, the Court found little plausible claim for relief and, therefore, declined to address the question of §230 immunity.

But if Anderson (or a similar case) goes to the Supreme Court, I believe the justices will have little difficulty finding that a “recommendation” algorithm promoting a video challenge that led to a child’s death is a foundation for a liability case to proceed. As the Court stated in Taamneh, “When there is a direct nexus between the defendant’s acts and the tort, courts may more easily infer such culpable assistance.” In Anderson, with no other party acting as the direct cause of the child’s death, the facts are even simpler, revealing a clear nexus between the video challenge “recommended” by the platform and the child’s accidental death. Further, this July, the Court held in the unanimous Moody v. NetChoice decision that social platforms “shape other parties’ expression into their own curated speech products.”[1] Under that rule, the Third Circuit finds that TikTok’s “recommendation” of the Blackout Challenge to Nylah Anderson plausibly constitutes the platform’s own speech, for which it may be held liable.

The reason I keep putting “recommended” in quotes is that, at the time SCOTUS granted cert in the Taamneh and Gonzalez cases, I wrote a post opining that courts, policymakers, et al. should take a jaundiced view of this too-friendly term for an insidious function of social media. It is no longer controversial to say that platform operators manipulate what users see and hear, or that this manipulation can lead to disastrous results, from disinformation campaigns in the political arena to drug-related deaths to the death of a little girl.

It is a familiar refrain that it takes a tragedy, or many tragedies, to change policy, and with the story of Nylah Anderson, and the many young victims she represents, we may finally see Big Tech’s hypocrisy on speech collapse under the weight of its own absurdity. The major platforms have played games with the First Amendment and §230 for nearly 20 years—conflating their business interests with users’ speech rights, asserting their own speech rights when convenient, or insisting that nothing they do is their own speech—all depending on which potential liability the company seeks to avoid. Further, that confusion has not been helped in recent years by certain politicians who misstate the operation of the speech right to create political theater around allegations of bias.

Out of all that mess, it is notable that Justice Thomas, since at least 2020,[2] has restated the observation that online platforms will avail themselves of constitutional protection to engage in conduct like algorithmic “recommendation” but then invert the argument to shroud themselves in the §230 shield—and the courts will stop a liability claim from even proceeding. As Congress, the Supreme Court, and now the Third Circuit have all reiterated, no other industry in the country enjoys that kind of immunity, and perhaps this claim against TikTok will be the case that finally ends this unfounded and unreasonable privilege for online platforms.


[1] On a side note, this is reminiscent of the “selection and arrangement” doctrine in copyright law, which finds “expression” in the choices made by the author who engages in that conduct. All copyrightable expression is a form of speech.

[2] See Justice Thomas’s statement respecting the denial of certiorari in Malwarebytes, Inc. v. Enigma Software Group USA, LLC (2020).


No FAKES Act Introduced: A Big Deal for Performing Artists and Everyone Else


Ever since the generative artificial intelligence (GAI) controversy began heating up, I’ve had several conversations with friends and colleagues who are voice actors and have had to disappoint them by repeating the fact that copyright law does not protect a person’s “likeness,” which includes one’s voice. And I’ve had similar conversations with colleagues focused on replication of likeness for the production of nonconsensual pornography. Nevertheless, the instinct makes sense—that the same human-centric principles that protect “authorship” might apply to the human’s likeness as well. Now, that basic sense of justice is articulated in a new bill introduced in the Senate.

Historically, the protection of likeness has been the subject of a relatively narrow area of law called the right of publicity (ROP), a common-law right with statutory provisions in 25 states—and narrow because ROP typically applies to the unauthorized use of celebrity likeness for commercial advertising purposes. But with the introduction of the No FAKES Act, Congress proposes to substantially change the protection of individual likeness in direct response to the capacity of GAI to conjure just about anything from fake news to fake performances by actors and musicians.

Introduced by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC), the acronym stands for Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The heart of the bill establishes a property right in the likeness of any person, living or dead, and prohibits digital replication without permission. Similar to copyright, the “digital replication right” is vested in every individual regardless of whether one commercially exploits one’s own likeness, and the right is licensable and transferable to heirs and assigns after death. Post-mortem rights would last 10 years but may be extended through a renewal and registration process administered by the U.S. Copyright Office if the right holder can show active and authorized public use of the voice or visual likeness.

The bill anticipates legitimate creative and newsworthy uses of unlicensed replication and exempts a broad range of uses for purposes like news, documentary, parody, etc. For a purpose to be “newsworthy,” the replicated individual must be the subject of the material created—e.g., a story about Hugh Jackman, not merely a replication of him “cast” for free in your film or commercial. Further, the bill explicitly states that creating a false impression that a given replication is an “authentic” recording of the individual will still trigger liability under the new law. Thus, the documentarian who uses a replication in a scene that looks like real surveillance or cellphone footage will probably need to identify that material as AI generated to avoid liability.

Remedies for violation of the digital replication right include damage awards of $5,000 per depiction made by individuals or by online providers, and $25,000 per depiction made by corporate entities other than online providers. Plaintiffs may also seek actual damages and attorney fees, and courts may award punitive damages where unlawful replications entail malice, fraud, or willful ignorance that the use violated the law.

Finally, taking a page from the Copyright Act, No FAKES contains a DMCA-like takedown provision for removal of content alleged to be an unlawful replication, and this provision includes maintenance by the Copyright Office of a database of “agents” to whom such complaints must be submitted. Likewise, familiar safe harbor provisions apply to both product developers and platforms that may, without their operators’ knowledge, be used to produce or distribute unlicensed replications.

Given Silicon Valley’s poor record of DMCA compliance where copyright owners are concerned, the takedown provisions in No FAKES naturally raise questions about everyday removal of material, which is often the first, if not the main, remedy non-performers will care about. Regardless, from my perspective, the bill both recognizes a wide range of abuses of GAI replication and exempts or limits liability for an appropriate range of legitimate, First Amendment-protected uses of the technology.

More than a good start, No FAKES appears to draw from many lessons learned over the past 20+ years pitting human and creative rights against the predatory “progress” of Big Tech. I join the Human Artistry Campaign in endorsing this bill and encourage the full Senate to pass it as soon as possible.

