It Looks Like the EFF is Pro FAKES


When it comes to cyber policy and anything resembling intellectual property, the Electronic Frontier Foundation’s critiques are so predictable, they might as well use ChatGPT to write their blog. For instance, in opposing the NO FAKES Act, an April post by Corynne McSherry selects items from the same menu of responses the EFF has used to oppose every form of online copyright enforcement. In this instance, she orders up the following: pretend to want a “better” bill; cite scary hypotheticals; pretend to care about creators; and, of course, insist that the speech right is in jeopardy.

For review, the NO FAKES Act would establish a new property right in every individual’s likeness, including one’s voice. As opined on this blog, its mechanisms comprise a thoughtful response to a novel challenge—namely the ability of just about any party to use generative artificial intelligence (GAI) to replicate the likeness of any person. The hazards of replication are obvious to the common-sense observer—from intensifying disinformation to commercial uses without permission to sexual predation, scams, and harassment. But as usual, the EFF advocates for the interests of the tech industry by framing its critiques in rhetoric that sounds pro-individual or even (ha!) pro-artist.

McSherry’s broadside at NO FAKES employs the tactic of alluding to hypothetical negative consequences, which Congress has (of course) failed to consider. Thus, EFF insists, as it did with bills like the CASE Act, that NO FAKES, as written, should be balled up, and that Congress should start over from scratch. But those of us familiar with the organization recognize that this pretense is there to mask the EFF’s view that the whole idea of a likeness right should be scuttled. If past is prologue, the EFF will never endorse any version of a law to remedy unlicensed AI likeness replication and, possibly, never engage as a good-faith negotiator on the subject.

Predictably, McSherry’s post elides important details about NO FAKES. I won’t unpack them all, but in one example, she writes, “The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies.” Of course, she doesn’t mention that although the 70-year term is the maximum, the likeness right would have to be renewed post-mortem and, similar to trademarks, renewal is conditioned on showing that the likeness is still in “authorized public use.”

But it was another paragraph that struck me as vintage EFF—an implication that existing right of publicity (ROP) laws in the states are already harmful to speech and that NO FAKES can only make matters worse. McSherry writes:

… it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died.

But following the links in that first paragraph, one finds a couple of unmeritorious ROP claims along with a couple of the EFF’s opinions about how ROP law should be applied in the context of the speech right. In the first instance, weak cases that do not prevail only disprove the allegation that a law has “reached beyond” its intent. And in the second instance, while the EFF is entitled to its opinion, its interpretation of the speech right is so expansive that it is unremarkable when the courts so often disagree with its positions.

In fact, the EFF’s overbroad concept of the speech right is one reason I say it is being disingenuous in asking for a revised likeness bill. NO FAKES arguably provides better guidance on the use of AI replicas for protected speech than ROP case law does, yet although McSherry acknowledges these provisions, she states, “…interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.” That’s code for “let’s not have anything like this law” because, of course, all laws need to be interpreted and, yeah, lawyers are usually involved.

Another hypothetical includes the familiar, and laughable, implication that the EFF cares about creators and performers…

People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image.

While it is true that an individual could over-license the use of her likeness to another party, this is no different from licensing traditional forms of intellectual property. That an owner might give away too much is a matter of the owner’s savvy and legal representation, not a rationale for opposing the right being established in the first place. This complaint is a rehash of the fallacy that copyright rights are bad because some parties have cajoled artists into signing over more than they should. As applied to NO FAKES, I suspect people will favor the right to control their own likenesses and then worry about licensing, if that becomes an issue.

Finally, McSherry’s post repeats the same old prediction that NO FAKES will lead to platforms removing some undefinable, yet unacceptable, volume of protected speech. It’s almost surprising that the EFF remains committed to this message when social media is clearly overflowing with so much protected hogwash—and when the major platforms are increasingly taking down innocuous posts without any rationale or transparency. A casual review of the current state of “information” on social platforms can only support the rational prediction that AI-generated likenesses will exacerbate the problem. At the same time, the EFF’s claim to defend “new creativity” is overstated when even protected uses of AI likenesses are often little more than brief diversions of limited cultural, and no informational, value.

For every potentially legal use of AI likeness, there are dozens of ways for scammers, foreign adversaries, predators, and unscrupulous business operators to use the technology to cause serious harm. But, true to form, the EFF asks that we ignore evidence of the damage being done and imagine instead that any remedy must be worse than the disease. Just off the cuff, they’ve used similar tactics to be wrong about the CASE Act, Section 1201, Section 230, site-blocking, and Controlled Digital Lending. So, it is hardly a bold speculation to say that they’re wrong about NO FAKES.


Image source: maxxyustas

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
