It Looks Like the EFF is Pro FAKES


When it comes to cyber policy and anything touching intellectual property, the Electronic Frontier Foundation’s critiques are so predictable that they might as well use ChatGPT to write their blog. For instance, in opposing the NO FAKES Act, an April post by Corynne McSherry selects items from the same menu of responses EFF has used to oppose every form of online copyright enforcement. In this instance, she orders up the following: pretend to want a “better” bill; cite scary hypotheticals; pretend to care about creators; and, of course, insist that the speech right is in jeopardy.

For review, the NO FAKES Act would establish a new property right in every individual’s likeness, including one’s voice. As opined on this blog, its mechanisms comprise a thoughtful response to a novel challenge—namely the ability of just about any party to use generative artificial intelligence (GAI) to replicate the likeness of any person. The hazards of replication are obvious to the common-sense observer—from intensifying disinformation to commercial uses without permission to sexual predation, scams, and harassment. But as usual, the EFF advocates the interests of the tech industry by framing its critiques in a rhetoric that sounds pro-individual or even (ha!) pro-artist.

McSherry’s broadside at NO FAKES employs the tactic of alluding to hypothetical negative consequences, which Congress has (of course) failed to consider. Thus, EFF insists, as it did with bills like the CASE Act, that NO FAKES, as written, should be balled up, and that Congress should start over from scratch. But those of us familiar with the organization recognize that this pretense is there to mask the EFF’s view that the whole idea of a likeness right should be scuttled. If past is prologue, the EFF will never endorse any version of a law to remedy unlicensed AI likeness replication and, possibly, never engage as a good-faith negotiator on the subject.

Predictably, McSherry’s post elides important details about NO FAKES. I won’t unpack them all, but in one example, she writes, “The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies.” Of course, she doesn’t mention that although the 70-year term is the maximum, the likeness right would have to be renewed post-mortem and, similar to trademarks, renewal is conditioned on showing that the likeness is still in “authorized public use.”

But it was another paragraph that struck me as vintage EFF—an implication that existing right of publicity (ROP) laws in the states are already harmful to speech and that NO FAKES can only make matters worse. McSherry writes:

… it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died.

But following the links in that first paragraph, one finds a couple of unmeritorious ROP claims along with a couple of EFF’s opinions about how ROP law should be applied in the context of the speech right. In the first instance, weak cases that do not prevail only disprove the allegation that a law has “reached beyond” its intent. And in the second, while the EFF is entitled to its opinion, its interpretation of the speech right is so expansive that it is unremarkable that the courts so often disagree with its positions.

In fact, the EFF’s overbroad concept of the speech right is one reason I say it is being disingenuous in asking for a revised likeness bill. NO FAKES arguably provides better guidance on the use of AI replicas for protected speech than ROP case law, but although McSherry acknowledges these provisions, she states, “…interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.” That’s code for “let’s not have anything like this law” because of course all laws need to be interpreted and, yeah, lawyers are usually involved.

Another hypothetical includes the familiar, and laughable, implication that the EFF cares about creators and performers…

People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image.

While it is true that an individual could over-license the use of her likeness to another party, this is no different from licensing traditional forms of intellectual property. That an owner might give away too much is a matter of the owner’s savvy and legal representation, not a rationale for opposing the establishment of the right in the first place. This complaint is a rehash of the fallacy that copyrights are bad because some parties have cajoled artists into signing over more than they should. As applied to NO FAKES, I suspect people will favor the right to control their own likenesses and then worry about licensing, if that becomes an issue.

Finally, McSherry’s post repeats the same old prediction that NO FAKES will lead to platforms removing some undefinable, yet unacceptable, volume of protected speech. It’s almost surprising that EFF remains committed to this message when social media is clearly overflowing with so much protected hogwash—and when the major platforms are increasingly taking down innocuous posts without any rationale or transparency. A casual review of the current state of “information” on social platforms can only support the rational prediction that AI-generated likenesses will exacerbate the problem. At the same time, EFF’s claim to defend “new creativity” is overstated when even protected uses of AI likenesses are often little more than brief diversions of limited cultural, and no informational, value.

For every potentially legal use of AI likeness, there are dozens of ways for scammers, foreign adversaries, predators, and unscrupulous business operators to use the technology to cause serious harm. But, true to form, the EFF asks that we ignore evidence of the damage being done and imagine instead that any remedy must be worse than the disease. Just off the cuff, they’ve used similar tactics to be wrong about the CASE Act, Section 1201, Section 230, site-blocking, and Controlled Digital Lending. So, it is hardly a bold speculation to say that they’re wrong about NO FAKES.


Image source by: maxxyustas

Podcast: AI and Voice Replication with Tim Friedlander


In this podcast, I talk with Tim Friedlander, voice actor, musician, and founder of the National Association of Voice Actors (NAVA). Tim joined me to talk about AI—its potential threats to his profession, his experiences on Capitol Hill, and his views on why this subject matters.

Contents

  • 00:32 – Tim’s background.
  • 03:07 – Political voiceovers.
  • 04:31 – Voice acting is acting.
  • 06:20 – About NAVA.
  • 10:25 – Size of NAVA and the market.
  • 12:35 – Experiences on the Hill.
  • 17:04 – Economic value of the market.
  • 18:53 – Resistance to the cause.
  • 21:46 – The challenge does not end with licensing.
  • 25:24 – What’s resonating on the Hill.
  • 28:55 – No FAKES Act.
  • 33:29 – Reasons why this conversation matters.
  • 40:15 – AI as a tool for creators.
  • 44:50 – Is it too late to respond?
  • 48:45 – The climate has changed for Big Tech.
  • 55:30 – No FAKES reprise.

Site Blocking Is Effective Worldwide Says New Report by IP House and DCA


Overseas and Out of Reach: International Video Piracy and U.S. Options to Combat It, released today by IP House and the Digital Citizens Alliance (DCA), is one more reason the U.S. Congress should adopt site-blocking legislation to protect American creators and consumers.

Thirteen years ago this coming January, Congress shelved bipartisan legislation that was designed to restrict foreign-based criminal enterprises from access to American consumers. Generally referred to as “site-blocking,” the focus was (and remains) combatting media piracy operators who illegally distribute or perform motion pictures, music, publications, etc.—most of which are produced in the United States. In 2011/12, Silicon Valley funded a multi-lateral disinformation campaign that frightened people into believing that site-blocking would chill speech, sidestep due process, and “break the internet.”

None of those allegations were true then, and if Congress revisits site-blocking, which it should, lawmakers can rely on the new IP House report showing that more than 50 countries have implemented some form of this piracy mitigation strategy without any of the negative consequences foretold by Big Tech and its network of hacktivists. Unsurprisingly, the report reveals that speech rights, due process, and functioning internet services persist in nations that have had site-blocking in force for about a decade or more.

Contrary to those who predicted that more access to more media would reduce piracy, Americans have more access today than they have time to consume, and yet piracy grew by 36 percent between 2021 and 2022, during which time 13.5 billion visits to film and TV piracy sites originated in the U.S. Meanwhile, as for those who claimed that site-blocking was too risky because piracy cannot be restrained, the report demonstrates that site-blocking measures have resulted in increased traffic to legal platforms for media entertainment.

Three separate studies—focused on the United Kingdom, Portugal, and Australia—found that when sites were blocked, traffic decreased to those sites. The decrease was substantial; traffic decreased by 89 percent in the United Kingdom, 70 percent in Portugal, and 69 percent in Australia.

Skeptics may doubt that, say, Russia is a reliable steward of speech rights and due process, and fair enough; but Australia, Canada, the UK, France, Germany, and Sweden are among the nations with site-blocking measures in force while reporting no harm to protected speech, no damage to the functioning of the internet, and none of the indiscriminate over-blocking that Big Tech and its “digital rights” allies insisted would be inevitable. Of course, much of that hyperbole has ebbed amid a more sober understanding that the internet is not the boon to democracy Google et al. proclaimed. So now, perhaps, we can have a candid discussion about the rationale for site-blocking and how it is implemented.

How Site-Blocking Works

As the new report describes in detail, most sophisticated pirate platforms operate in the shadows of online anonymity, in physical jurisdictions beyond the reach of U.S. law enforcement, and “in concert with other criminal entities.” As a $2bn+ industry, these enterprises have the resources to build nimble, complex systems, so shutting down one of the major operations and/or convicting its owners is nearly impossible—even with cooperation among friendly nations. For instance, the infamous Megaupload founder Kim Schmitz (Kim Dotcom) was arrested in New Zealand in 2012, but it was only this past August that New Zealand agreed to extradite him to the U.S. to stand trial.

In response to the challenge of stopping “out of reach” criminals, site-blocking prevents, or at least limits, a foreign illegal platform’s capacity to reach consumers in the target nation. To implement a block, a complainant (e.g., a major owner of the IP being infringed) bears a high burden of proof to show—in the U.S., this would be before a federal court—that a particular site is dedicated, or substantially dedicated, to mass piracy. If the court orders a block, the major ISPs in that nation are instructed to restrict access through various means (e.g., DNS blocking, blacklisting URLs), depending on the nature and structure of the pirate operation.
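To make the DNS-blocking mechanism concrete, here is a minimal sketch of the kind of blocklist check an ISP resolver would apply. Everything in it (the domain names, the blocklist, the upstream lookup table) is hypothetical and greatly simplified; real implementations live inside DNS server software and act on court-ordered lists, not application code.

```python
from typing import Optional

# Hypothetical court-ordered blocklist of pirate domains.
BLOCKLIST = {"pirate-streams.example", "illegal-vod.example"}

def resolve(domain: str, upstream: dict) -> Optional[str]:
    """Return the upstream IP for `domain`, or None if a blocking
    order covers the domain (including any of its subdomains)."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain and every parent domain against the blocklist,
    # so "cdn.pirate-streams.example" is blocked along with the apex.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # the real resolver would answer NXDOMAIN
    return upstream.get(domain)

# Stand-in for an upstream DNS lookup.
upstream = {
    "example.org": "93.184.216.34",
    "cdn.pirate-streams.example": "203.0.113.9",
}

print(resolve("example.org", upstream))                 # → 93.184.216.34
print(resolve("cdn.pirate-streams.example", upstream))  # → None
```

The point of the sketch is that the block is applied at the resolver, per domain, rather than by inspecting traffic; ordinary lookups pass through untouched, which is why the over-blocking predicted in 2012 has not materialized in the countries studied.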

Piracy is About Harm to Creators and Consumers

Nearly 80 percent of piracy sites delivered malware-ridden ads to their users…. More than half of the $121 million generated ($68.3 million) from malvertising came from U.S. visits to these sites.[1]

Even if site-blocking were solely about mass theft of creative works, it is absurd that the U.S., as the world’s largest and most diverse producer of such works, lags so far behind other nations in adopting this commonsense strategy to mitigate harm to American businesses. But in addition to the new report’s evidence that site-blocking has been effective without significant negative consequences, Congress must also recognize that both media piracy and cybercrime in general have become more sophisticated in the last decade.

For instance, two of the more popular modes of media piracy are the video on demand (VOD) and internet protocol TV (IPTV) models, whereby operators sell subscriptions to platforms that look like Netflix or Hulu but which stream and/or enable downloads of media files obtained and stored illegally. Some consumers know these sites are piracy-based, but because the platforms look and feel legitimate, many may not realize that they are paying criminal enterprises, making themselves vulnerable to cyber-attacks, and/or supporting a broad range of unsavory activity, including extortion, narcotics, human trafficking, and terrorism.

Al-Manar is a Lebanese television outlet operated by the extremist political party Hezbollah and is banned from operating in the United States after the U.S. government labeled it a “Specially Designated Global Terrorist entity.” Nevertheless, Al-Manar was offered on at least half of the piracy IPTV services…

Several DCA reports have presented substantial evidence of a nexus linking mass media piracy to organized crime, and as this new report states, “The more profitable piracy is, the more likely organized criminals are or will become involved in it.” Past reports have shown that these platforms are major vectors for malware, including ransomware and remote access trojans (RATs) used to hijack computers, and that visitors to pirate sites are “disproportionately vulnerable to credit card fraud.”

Meanwhile, it is hard to miss the fact that buying ordinary products online today, even on major ecommerce platforms, requires heightened vigilance to avoid counterfeits that may be faulty or dangerous. Add to this chaos the potential of AI to amplify a broad range of assaults on American institutions, businesses, and consumers, and it is clearly a moment for Congress to fan away the dust of the “Stop SOPA” campaign of 2012 and reaffirm that site-blocking is a practical tool in defense of the public interest. “The lack of evidence of abuse suggests that site-blocking orders are fair, rigorous, and issued only in legitimate cases of large-scale piracy,” the report states. That was predictable more than a decade ago. Time to catch up.


[1] IP House report citing this article.
