It Looks Like the EFF is Pro FAKES

When it comes to cyber policy and anything touching intellectual property, the Electronic Frontier Foundation’s critiques are so predictable, they might as well use ChatGPT to write their blog. For instance, in opposing the NO FAKES Act, an April post by Corynne McSherry selects items from the same menu of responses EFF has used to oppose every form of online copyright enforcement. In this instance, she orders up the following: pretend to want a “better” bill; cite scary hypotheticals; pretend to care about creators; and, of course, insist that the speech right is in jeopardy.

For review, the NO FAKES Act would establish a new property right in every individual’s likeness, including one’s voice. As opined on this blog, its mechanisms comprise a thoughtful response to a novel challenge—namely the ability of just about any party to use generative artificial intelligence (GAI) to replicate the likeness of any person. The hazards of replication are obvious to the common-sense observer—from intensifying disinformation to commercial uses without permission to sexual predation, scams, and harassment. But as usual, the EFF advocates the interests of the tech industry by framing its critiques in a rhetoric that sounds pro-individual or even (ha!) pro-artist.

McSherry’s broadside at NO FAKES employs the tactic of alluding to hypothetical negative consequences, which Congress has (of course) failed to consider. Thus, EFF insists, as it did with bills like the CASE Act, that NO FAKES, as written, should be balled up, and that Congress should start from scratch. But those of us familiar with the organization recognize that this pretense is there to mask the EFF’s view that the whole idea of a likeness right should be scuttled. If past is prologue, the EFF will never endorse any version of a law to remedy unlicensed AI likeness replication and, quite possibly, will never engage as a good-faith negotiator on the subject.

Predictably, McSherry’s post elides important details about NO FAKES. I won’t unpack them all, but in one example, she writes, “The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies.” Of course, she doesn’t mention that although the 70-year term is the maximum, the likeness right would have to be renewed post-mortem and, similar to trademarks, renewal is conditioned on showing that the likeness is still in “authorized public use.”

But it was another paragraph that struck me as vintage EFF—an implication that existing right of publicity (ROP) laws in the states are already harmful to speech and that NO FAKES can only make matters worse. McSherry writes:

… it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died.

But following the links in that first paragraph, one finds a couple of unmeritorious ROP claims along with a couple of EFF’s opinions about how ROP law should be applied in the context of the speech right. In the first instance, weak cases that do not prevail serve only to disprove the allegation that a law has “reached beyond” its intent. And in the second instance, while the EFF is entitled to its opinion, its interpretation of the speech right is so expansive that it is unremarkable when the courts so often disagree with its positions.

In fact, the EFF’s overbroad concept of the speech right is one reason I say it is being disingenuous in asking for a revised likeness bill. NO FAKES arguably provides better guidance on the use of AI replicas for protected speech than ROP case law, but although McSherry acknowledges these provisions, she states, “…interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.” That’s code for “let’s not have anything like this law” because of course all laws need to be interpreted and, yeah, lawyers are usually involved.

Another hypothetical includes the familiar, and laughable, implication that the EFF cares about creators and performers…

People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image.

While it is true that an individual could over-license the use of her likeness to another party, this is no different from licensing traditional forms of intellectual property. That an owner might give away too much is a matter of the owner’s savvy and legal representation, not a rationale to oppose establishing the right in the first place. This complaint is a rehash of the fallacy that copyright rights are bad because some parties have cajoled artists into signing over more than they should. As applied to NO FAKES, I suspect people will favor the right to control their own likenesses and then worry about licensing, if that becomes an issue.

Finally, McSherry’s post repeats the same old prediction that NO FAKES will lead to platforms removing some undefinable, yet unacceptable, volume of protected speech. It’s almost surprising that EFF remains committed to this message when social media is clearly overflowing with so much protected hogwash—and when the major platforms are increasingly taking down innocuous posts without any rationale or transparency. A casual review of the current state of “information” on social platforms can only support the rational prediction that AI-generated likenesses will exacerbate the problem. At the same time, EFF’s claim to defend “new creativity” is overstated when even protected uses of AI likenesses are often little more than brief diversions of limited cultural, and no informational, value.

For every potentially legal use of AI likeness, there are dozens of ways for scammers, foreign adversaries, predators, and unscrupulous business operators to use the technology to cause serious harm. But, true to form, the EFF asks that we ignore evidence of the damage being done and imagine instead that any remedy must be worse than the disease. Just off the cuff, they’ve used similar tactics to be wrong about the CASE Act, Section 1201, Section 230, site-blocking, and Controlled Digital Lending. So, it is hardly a bold speculation to say that they’re wrong about NO FAKES.


Image source by: maxxyustas

Podcast: AI and Voice Replication with Tim Friedlander

In this podcast, I talk with Tim Friedlander, voice actor, musician, and founder of the National Association of Voice Actors (NAVA). Tim joined me to talk about AI — its potential threats to his profession, his experience meeting on Capitol Hill, and his views on why this subject matters.

Contents

  • 00:32 – Tim’s background.
  • 03:07 – Political voiceovers.
  • 04:31 – Voice acting is acting.
  • 06:20 – About NAVA.
  • 10:25 – Size of NAVA and the market.
  • 12:35 – Experiences on the Hill.
  • 17:04 – Economic value of the market.
  • 18:53 – Resistance to the cause.
  • 21:46 – The challenge does not end with licensing.
  • 25:24 – What’s resonating on the Hill.
  • 28:55 – No FAKES Act.
  • 33:29 – Reasons why this conversation matters.
  • 40:15 – AI as a tool for creators.
  • 44:50 – Is it too late to respond?
  • 48:45 – The climate has changed for Big Tech.
  • 55:30 – No FAKES reprise.

No FAKES Act Matched in House Bill to Address Gen AI Replication

On Monday, beloved actor James Earl Jones passed away at age 93; in 2022, he had signed an agreement with Lucasfilm to allow the voice of Darth Vader to live on through Gen AI replication. Jones’s permission to replicate his voice is a bittersweet prelude to today’s news from Capitol Hill, where the House of Representatives introduced its own No FAKES Act to prohibit the unlicensed replication of any person’s likeness or voice. Sponsored by Reps. Salazar, Dean, Moran, Morelle, and Wittman, the House bill is identical to the Senate No FAKES Act introduced in late July and, so, demonstrates a bicameral, as well as bipartisan, sense of urgency to address misuse of Gen AI for this purpose.

To recap, No FAKES establishes a new property right in the likeness of any person and prohibits unauthorized replication of a likeness, which includes voice. Historically, likeness has only been protected on a limited basis by a patchwork of state Right of Publicity (ROP) laws, typically prohibiting unauthorized use of a celebrity likeness for commercial/advertising purposes. But the unprecedented capability of Gen AI to be used by anyone to replicate the likeness of anyone—and which will exacerbate the reality-bending world of online “information”—has prompted Congress to move swiftly and, in my view, creatively.

It was July 2023 when the idea of a federal ROP law was discussed during a hearing held by the House Judiciary Committee Subcommittee on Intellectual Property. At the time, I imagined this was a prelude to years of haggling on Capitol Hill while Gen AI developers proceeded at internet speed to wreak havoc with tools to produce more advanced “deepfakes.” Instead, the introduction of No FAKES in the Senate just one year later—and now, the same bill in the House less than two months after that—reveals both seriousness and deftness in legislators’ zeal to confront the issue. Rather than approach the matter as one to be remedied by a federal ROP law, Congress, with input from various stakeholders, has responded to the novelty of the challenge with novel legislation, drawing upon principles found in ROP, trademark, and copyright law.

If passed, No FAKES would operate akin to ROP, but it automatically applies to every citizen, and unlawful replication is not limited to commercial/advertising purposes. At the same time, because many misuses of Gen AI replication have both reputational and commercial implications, No FAKES shares a kinship with trademark, which is a creature of the Commerce Clause. And finally, the new right is copyright-like as a property right which vests in the individual, may be licensed for various uses, and is descendible to heirs and assigns with certain limits and conditions unique to protecting likeness.

Opposition Is Familiar but the Battlefield Is Different

Many of the usual suspects representing Big Tech, including the newly formed (I can’t believe they called it this) Chamber of Progress, will likely raise constitutional challenges to No FAKES, leaning hard into the refrain that the new likeness right will chill protected speech. As to the merits of that argument, the text of the bill already includes well-crafted, First Amendment-based exceptions; and as a PR message, I believe Big Tech is refreshingly at a disadvantage. Concerns over abuse of Gen AI encompass a broad range of Americans—from professional creators to parents seeing how easily children can be sexually exploited—and in general, people just aren’t buying Big Tech’s “make life better” rhetoric anymore.

Examples of legitimate innovation (e.g., Jones permitting Darth Vader to continue, or Randy Travis overcoming physical voice loss) will entail permission of the person whose likeness or voice is being replicated. Yet, in response to the many harms which may be caused by unlicensed Gen AI replication, AI defenders will promote the overbroad refrain that “innovation” must be allowed to flourish — but of course, “innovation” is Big Tech’s euphemism for “profitability at any cost.” Congress is still playing catch-up to address myriad harms fostered by pre-AI social media and is, therefore, reluctant to repeat the mistakes of the late 1990s by allowing Gen AI “room to grow” without restrictions.

Interestingly, Chamber of Progress appears designed to frame the multi-billion-dollar AI gamble as socially and politically “progressive,” a strategy belied by its advocating broad liability shields for AI developers akin to Section 230 of the CDA and Section 512 of the DMCA. In fact, that view aligns perfectly with OpenAI CEO Sam Altman suggesting that it is impossible to develop AI without free use of copyrighted works, or with investor Marc Andreessen writing a smug and erroneous manifesto as a plea for continued laissez-faire policy in all things tech. If there is anything “progressive” about Gen AI, Chamber of Progress will need to produce more than worn-out rhetoric to prove it.

We’ve been here and done this, but No FAKES is a bill with a lot of political momentum. The likelihood that many citizens will oppose a prohibition on the unlicensed use of their own, or their children’s, likenesses seems low to the point of futility. We’ll see what comes, but by my lights, No FAKES is destined to become law.


Image by: nikolay100