No FAKES Act Matched in House Bill to Address Gen AI Replication


On Monday, beloved actor James Earl Jones passed away at age 93, but in 2022, he signed an agreement with Lucasfilm to allow the voice of Darth Vader to live on through Gen AI replication. Jones’s permission to replicate his voice is a bittersweet prelude to today’s news from Capitol Hill, where the House of Representatives introduced its own No FAKES Act to prohibit the unlicensed replication of any person’s likeness or voice. Sponsored by Reps. Salazar, Dean, Moran, Morelle, and Wittman, the House bill is identical to the Senate No FAKES Act introduced in late July and thus demonstrates a bicameral, as well as bipartisan, sense of urgency to address misuse of Gen AI for this purpose.

To recap, No FAKES establishes a new property right in the likeness of any person and prohibits unauthorized replication of a likeness, which includes voice. Historically, likeness has only been protected on a limited basis by a patchwork of state Right of Publicity (ROP) laws, typically prohibiting unauthorized use of a celebrity likeness for commercial/advertising purposes. But the unprecedented capability of Gen AI to be used by anyone to replicate the likeness of anyone—and which will exacerbate the reality-bending world of online “information”—has prompted Congress to move swiftly and, in my view, creatively.

It was July 2023 when the idea of a federal ROP law was discussed during a hearing held by the House Judiciary Committee Subcommittee on Intellectual Property. At the time, I imagined this was a prelude to years of haggling on Capitol Hill while Gen AI developers proceeded at internet speed to wreak havoc with tools to produce more advanced “deepfakes.” Instead, the introduction of No FAKES in the Senate just one year later—and now, the same bill in the House less than two months after that—reveals both seriousness and deftness in legislators’ zeal to confront the issue. Rather than approach the matter as one to be remedied by a federal ROP law, Congress, with input from various stakeholders, has responded to the novelty of the challenge with novel legislation, drawing upon principles found in ROP, trademark, and copyright law.

If passed, No FAKES would operate akin to ROP, but it automatically applies to every citizen, and unlawful replication is not limited to commercial/advertising purposes. At the same time, because many misuses of Gen AI replication have both reputational and commercial implications, No FAKES shares a kinship with trademark, which is a creature of the Commerce Clause. And finally, the new right is copyright-like as a property right which vests in the individual, may be licensed for various uses, and is descendible to heirs and assigns with certain limits and conditions unique to protecting likeness.

Opposition Is Familiar but the Battlefield Is Different

Many of the usual suspects representing Big Tech, including the newly formed (I can’t believe they called it this) Chamber of Progress, will likely raise constitutional challenges to No FAKES, leaning hard into the refrain that the new likeness right will chill protected speech. As to the merits of that argument, the text of the bill already includes well-crafted, First Amendment-based exceptions; and as a PR message, I believe Big Tech is refreshingly at a disadvantage. Concerns over abuse of Gen AI encompass a broad range of Americans—from professional creators to parents seeing how easily children can be sexually exploited—and in general, people just aren’t buying Big Tech’s “make life better” rhetoric anymore.

Examples of legitimate innovation (e.g., Jones permitting Darth Vader to continue, or Randy Travis overcoming physical voice loss) will entail permission of the person whose likeness or voice is being replicated. Yet, in response to the many harms which may be caused by unlicensed Gen AI replication, AI defenders will promote the overbroad refrain that “innovation” must be allowed to flourish — but of course, “innovation” is Big Tech’s euphemism for “profitability at any cost.” Congress is still playing catch-up to address myriad harms fostered by pre-AI social media and is, therefore, reluctant to repeat the mistakes of the late 1990s by allowing Gen AI “room to grow” without restrictions.

Interestingly, Chamber of Progress appears designed to frame the multi-billion-dollar AI gamble as socially and politically “progressive,” a strategy belied by its advocating broad liability shields for AI developers akin to Section 230 of the CDA and Section 512 of the DMCA. In fact, that view aligns perfectly with OpenAI CEO Sam Altman suggesting that it is impossible to develop Gen AI without free use of copyrighted works, or with investor Marc Andreessen writing a smug and erroneous manifesto as a plea for continued laissez-faire policy in all things tech. If there is anything “progressive” about Gen AI, Chamber of Progress will need to produce more than worn-out rhetoric to prove it.

We’ve been here and done this, but No FAKES is a bill with a lot of political momentum. The likelihood that many citizens will oppose a prohibition on the unlicensed use of their own, or their children’s, likenesses seems low to the point of futility. We’ll see what comes, but by my lights, No FAKES is destined to become law.


Image by: nikolay100

Human Voice Gaining Protection in Confronting Generative AI


Last week, Tennessee passed the ELVIS Act to expand its statutory right of publicity (ROP) law to include voice as a protected aspect of an individual’s “likeness.” With artificial intelligence enabling ever more precise replication of specific, human-sounding voices, it is little surprise that the music powerhouse state has taken swift action to explicitly include voice among the property rights protected by its ROP statute. With the music industry contributing $9.7 billion in output to the Nashville region alone, Tennessee lawmakers took less than three months to introduce and pass the Ensuring Likeness, Voice, and Image Security (ELVIS) Act, and they could not have been luckier that the acronym works so perfectly!

Tennessee’s existing ROP law already proscribed unlicensed use of “likeness” for a wide range of commercial purposes, and the ELVIS amendments create a civil cause of action for the publication, performance, or transmission of an unauthorized “likeness,” or for making available an algorithm, software, tool, or the like with the primary purpose or function of producing one. This addition is notable because it creates potential liability for the generative AI developer whose interest may be producing the next Mary Kutter song without Mary Kutter.

Although Tennessee is not the first state to include voice in the definition of “likeness” for the purpose of ROP law, the music industry’s support indicates that the ELVIS Act is the first to directly confront the prospect of generative AI replicating artists without consent. “We applaud Tennessee’s swift and thoughtful bipartisan leadership against unconsented AI deepfakes and voice clones and look forward to additional states and the US Congress moving quickly to protect the unique humanity and individuality of all Americans,” stated Mitch Glazier, chairman and CEO of the Recording Industry Association of America.

Widening the lens to all Americans and early proposals for a federal right of publicity, the prospect of generative AI being used either to replicate a “likeness” that is not yet widely recognizable, or to produce synthetic “performers” to displace humans, presents two challenges not easily addressed by traditional ROP doctrines. Historically, the application of these various laws is clearest when the “likeness” of a celebrity or public figure is used for commercial advertising or endorsement. By contrast, non-famous persons, even in states with strong ROP statutes, bear a higher burden to show reputational harm.

Thus, vesting a property right in one’s voice is a step in the right direction, but it is the various uses of a “likeness” leading to causes of action that get tricky. In its article about the ELVIS Act, Billboard cites a speech by National Music Publishers’ Association (NMPA) president and CEO David Israelite stating that the much larger motion picture industry opposes a federal right of publicity. I have addressed some of the reasonable concerns motion picture producers might raise with legislation proscribing the use of generative AI for “expressive purposes,” and wherever one leans on these questions, artificial voice exemplifies the difficult nature of adopting policies around generative AI in the creative industries.

As a general view, I stand with creators who see the potential for generative AI to displace human creators and maintain that there is nothing to be gained—culturally or economically—in a future creative sector with dramatically fewer professionals. But the ELVIS Act itself highlights the challenge of writing policy that looks beyond the current population of famous or semi-famous professionals. In this context, perhaps audiobook narrators provide some insight. I’ve talked to several voice actor friends and colleagues in recent months, and after I explain why copyright doesn’t typically protect their interests, we turn to the subject of ROP, and I disappoint them further by explaining why those laws don’t quite address the prospect of scraping voice recordings to train a generative AI.

Award-Winning Book Narrator Encounters Her Virtual Self?

I recently spoke to audiobook narrator Hillary Huber, who discovered that her voice may be the unauthorized source of a Virtual Voice, a service provided to self-published authors on the Kindle Direct Publishing (KDP) platform. The Virtual Voice concept uses synthetic voice technology to enable the self-published author of a modestly selling title to create an audiobook she could otherwise not afford to produce. But Virtual Voice, a feature of Amazon’s publishing business, naturally raises two questions: first, whose voices are used to train the AI? And second, is the model a harbinger of doom for professional book narrators throughout the industry?

Huber was alerted to the possibility of her vocal doppelganger by a friend sharing links to several books on the KDP platform and telling her, “This is your voice!”  But, as Huber explained to me, “Because our own voices never sound the same to ourselves as to others, I asked several colleagues to weigh in, and they were unanimous in their opinion that it was a version of me—not just the sound, but also certain markers like cadence and inflection.”

To my ear, which has not been trained on the more than 700 books Huber has narrated, the Virtual Voice sample sounds either like a mediocre computer rendering of her, or like a recording of her voice with a computerized filter distorting the sound. The latter, of course, did not occur because Huber did not narrate the book in question, but whether Virtual Voice was “trained” without license on the voices of professional narrators like Huber and her colleagues is a question worth asking.

More broadly, as a matter of law and policy, the book narration business is perhaps instructive to other creators, including other voice actors, musical performers, et al. One difficulty, it seems, lies in distinguishing among the unknown, the semi-famous, and the famous, and Huber confirmed for me that the book narration world is indeed segmented into these three strata. Many unknown narrators earn modest incomes recording a broad range of modestly selling audiobooks; a small group of regulars like Huber can earn middle-class incomes reading more popular books; and, of course, celebrities are occasionally paid whatever they can negotiate to read bestsellers. Naturally, it is the narrator whose name and voice may not be widely recognizable, even among avid book listeners, who is most anxious about the prospect of losing her job to generative AI.

Additionally, when I asked Huber if she knew how many narrators are in the group I called the “recognizable regulars,” her guess was a surprisingly low number, well below 100 narrators. I figured the number would be small, but not that small, and this raises real concerns about the narration business. For one thing, Congress isn’t motivated to protect a handful of jobs. For another, even if a few hundred voices produced a training dataset of, say, one million popular books, it would seem a comparatively light task for a generative AI developer to create enough variety in synthetic voices to replace the narration workforce.

In that regard, while it may be tempting for some book narrators to license the use of their voices for a purpose like Virtual Voice, it is hard to see how this would not very quickly obviate the need for human narrators to produce audiobooks, or even to license their voices for generative AI, for long. At a certain threshold, the AI is expected to self-train, suggesting that a handful of narrators might obtain one-time licensing deals and then nobody will ever do so again.

Assuming that’s a fair summary, some might ask why Congress should consider a provision like the ELVIS Act as a starting point for a federal ROP law with an aim to protect more than today’s musical performers. In my view, the answer goes back to considering future generations of creators. If there is one consistent feature in Big Tech’s influence on the creative sector, it is that the major platforms developed thus far are highly effective at cannibalizing existing works of great value while shrinking opportunities for new creators at every level.

If the U.S. is going to continue to foster new generations of professional creators, it is necessary that policy in this area does not focus too narrowly on the current population of recognizable and famous creators. Here, although copyright law does not apply to the property rights in “likeness,” its foundational purpose to “promote progress” might serve as a guiding principle in crafting new federal laws that vest property rights in our images, names, and voices.


Photo by: Andrew282

The Mugshot Heard ‘round the World

It was no surprise that the mugshot was immediately copied onto tees, hats, coffee mugs, etc. and sold to Americans who see either a martyr or a traitor in the same image. It was also no surprise that Team Trump produced merch of its own to sell for campaign (a.k.a. criminal defense) fundraising purposes. But these and other uses of the photograph have fostered some legal discussions on chat boards and elsewhere as to who, if anyone, has the right to control the exploitation of the mugshot. And so, I offer my own takes for what they’re worth.

Who Owns the Copyrights in the Image?

This is actually two questions:  1) is the Trump mugshot copyrightable at all? and 2) if so, who would be the owner of the copyright? Opinions will vary, but in my view, there are several factors that militate against enforceable copyright in this photograph, which is tantamount to having no copyright at all. If any party could own the copyright, it would logically be the State of Georgia or Fulton County, but aside from the fact that neither entity is likely to file a registration application for the photo with the Copyright Office, there is arguably no basis for finding sufficient originality in the image.

The mugshot photo station at the jailhouse is presumably as static as a surveillance camera—arranged to capture the same, fact-intensive photo for a highly utilitarian, informative purpose. No human (e.g., officer or clerk) can reasonably claim to have made any creative choices producing original expression in the Trump mugshot, and this militates against any copyright, which, if it did exist, would automatically vest in the state or county employer as a work made for hire. If there is any expression in the image at all, it is arguably Trump’s “creative” choice to make the angry face. But although I have explored the question of co-authorship by subjects in photographs, this is 1) a thought experiment outside the bounds of case law; and 2) a theory that would likely find less foundation in an image that is more factual than expressive in nature.

For these reasons alone, I believe the image would not be copyrightable, even if the state entity were to try to register the photograph with the Copyright Office. But no matter what, there is no legal authority under which Trump could own the copyright.

Can the Trump Campaign Control the Merch?

On August 29, Trump campaign adviser Chris LaCivita posted on X, “If you are a campaign, PAC, scammer and you try raising money off the mugshot of @realDonaldTrump and you have not received prior permission…WE ARE COMING AFTER YOU…you WILL NOT SCAM DONORS.”

Notwithstanding the tongue-biting irony of Team Trump using the word scam, LaCivita’s message could be read as a valid warning to any parties that might pretend to be the Trump campaign, but that would be an odd statement in regard to the mugshot because this type of fraud has nothing to do with use of the photograph per se. If, instead, LaCivita means to imply that the Trump campaign has an exclusive right to sell “official” mugshot merchandise for commercial purposes—or to prevent use of the image to raise funds in opposition to Trump—then he’s dead wrong on the law, as that crowd so often is.

Trademark Law Does Nothing for Trump

Although it is permissible to register trademarks in certain words or images used in political campaigns (e.g., slogans or logos), there are both administrative and doctrinal reasons why the Trump campaign could not claim the mugshot as a service mark. As a practical matter, the trademark claimant must use the relevant mark in trade when applying for protection and then go through a rather lengthy process to affirm the mark remains in use—and use in a specific class (or classes) of goods and/or services.

But in this case, the instant the mugshot was shared with the world, it conveyed irreconcilably divergent meanings to the public. So, under trademark practice, could Trump assert the exclusive right to use the “mark” in a class called Multiply Indicted, Seditious Former Presidents? Probably not since no such class exists. But that’s generally what the image conveys to millions of Americans, and the purpose of trademark is to protect the earned integrity of brands, not to burnish the reputations of politicians reviled by more than half the population.

What About Trump’s Likeness?

It may not be Trump’s mugshot as IP, but it is certainly his mug, and doesn’t his right of publicity (ROP) allow him to control how his likeness is used? As discussed in the context of artificial intelligence, ROP laws are statutory in half the states, common law elsewhere, and there is no federal ROP statute. Most importantly, though, ROP generally applies to commercial use of an individual’s likeness for endorsement or advertising purposes. Thus, Susan Scafidi, founder of the Fashion Law Institute, is off the mark when quoted in the New York Times stating, “Trump could, in theory, attempt to shut down sales of merch with his mug shot, not unlike the way Obama objected to appearing on a Weatherproof Garment Company billboard…”

The comparison is inapt. Unauthorized use of a likeness (even of a political figure) for commercial advertising is likely to be a paradigmatic violation of ROP. So, if an entity were to use the Trump mugshot to promote its goods or services, Trump should have a strong legal foundation for stopping that use.[1] By contrast, reproducing the mugshot for the purpose of mocking, criticizing, or downright hating any political figure is protected speech at the core of the First Amendment, and Trump would have no legal foundation to enjoin such uses.

But what if the mugshot is reproduced (on merch or elsewhere) without accompanying commentary? If I walk through town wearing a tee shirt with the unaltered mugshot on it, observers who don’t know me would have no idea whether I am celebrating or denouncing the Georgia arraignment. So, does this ambiguity alter the First Amendment consideration such that Trump would have any grounds to stop the production of merchandise that merely reproduces the photo? Again, I would say no if only because the mugshot is a factual statement of extraordinary newsworthy value to the public. Thus, the production and distribution of merchandise bearing no communication other than the image should still be protected by the speech and press rights, even if the right of redress is not implicated.

So, that’s my 50 cents on some of the legal discussion surrounding this image, which may one day be more widely reproduced than Alberto Korda’s photograph of Che Guevara.[2] Of course, this is all nerdy food for thought because it’s hard to imagine that any of these questions will ever be presented in court. Even if Team Trump could show standing, they have bigger sheep to fleece and zero hope of controlling the perception of millions that a mugshot is usually just a photograph of a criminal.


[1] It is of course possible to blur the line between a company’s politics and its marketing, which would result in a fact-intensive inquiry into the matter. Likewise, a not-for-profit could promote a policy message that Trump does not endorse and use the mugshot to illustrate the opposition, and this should not be a violation of ROP.

[2] Ironically, this is actual and rampant infringement of the photographer’s copyright rights.