Site-blocking: can the U.S. finally get it done?


The Motion Picture Association (MPA) has signaled a renewed interest in site-blocking legislation to combat piracy. Will things be different this time?

When the internet industry killed the antipiracy bills SOPA and PIPA in January 2012, I was a newbie blogger, but I guessed at the time that those parties had spent all their political capital on that campaign. First, there was the boy-who-cried-wolf problem: Google & Co. had deployed so much hyperbole that they could never again credibly sound the “death of the internet” alarm. Next, the general belief that “the internet” is inherently a force for good waned perceptibly after 2012 and then fell off a cliff circa 2016. Today, neither the general public, the government, nor the press fawns over the “white knights” of Silicon Valley as they did when those bills were scuttled.

None of that addresses the fact that the “Stop SOPA” campaign was a tidal wave of disinformation, but it would be naive to think that facts would win today any more than they did twelve years ago. When new site-blocking proposals begin to make headlines, and the network of tech-funded groups howls BEWARE SOPA 2!!, it will be interesting to see whether the same, or similar, false talking points are effective in an environment that is more skeptical of Big Tech in general.

What is site-blocking, and why do it?

Site-blocking today would probably work much as it was intended back in 2011. A complainant would have the burden to prove to a court that a platform is principally engaged in illegal activity (e.g., media piracy) and is operating outside the reach of U.S. law enforcement. With sufficient evidence, the complaining party(ies) would obtain an injunction to deny the platform access to the U.S. market. The basic mechanisms are not much more complicated than that, though we can expect the same network of “digital rights” groups to sharpen the rhetorical pitchforks and again stoke allegations that this process will “break” the internet or that it violates free speech rights.

Of course, neither claim is true. Site-blocking is employed as a remedy throughout the democratic world, where the internet still functions and speech rights are not infringed (at least not because of site-blocking). Blocking a criminal web platform from access to the U.S. implicates speech no more than interdicting a cargo ship full of counterfeit electronics does. On that subject, the need for site-blocking legislation today is more urgent than it was in 2011, and not just for movies and music.

Although the MPA et al. will naturally focus on sites illegally hosting and/or streaming pirated entertainment, establishing a broader rationale for site-blocking—i.e., getting past unfounded ideological opposition—will serve other business and private interests. Online predators of every type have continued to adapt since 2012; evidence shows that media piracy is integrated with a broad spectrum of cybercrime; and the U.S. lags behind the EU et al. in adopting this basic mechanism of protection.

For instance, small-business owners making creative products sold on eCommerce platforms lack the resources to combat, or the margins to absorb, the pace of counterfeiting by foreign actors. Advancements in small-batch production methods and drop-shipping offer new flexibility for counterfeiters to flood the U.S. market with cheap knockoffs, harming both legitimate producers and consumers. Meanwhile, media pirate sites are delivery platforms for malware used for cyber extortion (including sextortion), identity theft, and direct theft of private and sensitive material from personal and business networks.

So, although the MPA will likely be the most prominent advocate of site-blocking legislation, there are many disparate parties—from small-business owners to advocates fighting online sexual abuse—who may see the value in the U.S. finally adopting a remedy the EU et al. have had in place for a decade or more.

An Interesting Moment

In 2011, it was easy to spread the message that site-blocking was only about “Hollywood” protecting its wealth to the detriment of speech on the internet. It wasn’t true then, of course, but it will be interesting to see whether some form of the same rhetoric will gain traction in the coming years. Specifically, a whole generation has grown to young adulthood since then—kids who never heard the proverbial boy cry wolf the first time. Notably, TorrentFreak reports that Gen Z exhibits a high rate of pirate site access, citing familiar rationales: streaming subscriptions are too expensive, and interest in a single title does not justify subscribing to the necessary service.

But what will really be interesting to watch over the next few months will be Gen Z’s susceptibility (or not) to the “Save TikTok” campaign already underway. On April 24, President Biden signed a solidly bipartisan law stating that TikTok will be banned in the U.S. unless, within nine months, owner ByteDance sells the platform and, thereby, severs all ties to the Chinese Communist Party (CCP). ByteDance, in addition to vowing it will fight the law in U.S. courts, has already launched a PR campaign, including social media messages that will tap into the same emotional triggers used during the “Stop SOPA” campaign.

As Google & Co. did in 2011/12, ByteDance will use its addictive interface to promote the message that its business interests are synonymous with its users’ rights, only this time, the rhetoric isn’t coming from Big Tech filtered through the Electronic Frontier Foundation—it’s a psyop of the CCP. The efficacy of the “Save TikTok” campaign will be telling, not only about the viability of site-blocking legislation, but about the hoped-for savvy that “digital natives” might reveal about navigating the perils of cyberspace.

In 2011, it was frightening to watch the platforms use the insidious power of the platforms to advocate the policy interests of the platforms. Now, that same playbook is being run by a foreign adversary targeting 170 million Americans, many of them young, and it is an anxious moment, to say the least, waiting to see how they respond. Regardless, the underlying rationale for site-blocking is sound, and I hope that both copyright and non-copyright interests see it as a necessary protection of American enterprise and security.


Photo by: tommoh29

Pass the TikTok Legislation. And then…


“At what point then is the approach of danger to be expected? I answer, if it ever reach us, it must spring up amongst us. It cannot come from abroad. If destruction be our lot, we must ourselves be its author and finisher. As a nation of freemen, we must live through all time, or die by suicide.” – Abraham Lincoln, The Lyceum Address, 1838 –

Lincoln’s famous observation that only Americans can truly destroy America speaks to the fragility of the Republic, which the founders knew could only endure so long as the people generally keep faith with certain core principles. Watching those principles assaulted by a far-right populism, which has presently swallowed the Republican Party, it is natural to read Lincoln as prophetic, and it is hard to imagine any foreign influence being more dangerous. On the other hand, when Lincoln said, “It cannot come from abroad,” he could hardly have imagined a time when 170 million young Americans would carry a pocket surveillance device loaded with software under the control of a foreign adversary.

Following the 362-55 vote by the House to force TikTok to divest itself of all ties to the Chinese Communist Party (CCP), opinions about the bill have questioned both its necessity and its viability, though not with good reason. Although rashly described as a “ban,” H.R. 7521 would in effect force a sale of the platform by parent company ByteDance to an owner without ties to the CCP. To that end, I agree with independent musician Blake Morgan, who endorses the TikTok legislation as both a national security and an anti-piracy measure. In an editorial for IP Watchdog, Morgan writes:

The vast majority of music on TikTok generates virtually no revenue for the musicians who made it, and even more music on the platform is completely unlicensed (stolen), copied (stolen via AI), or pirated (stolen). Simply put, TikTok is trying to build a music-based business without paying music makers fair value for the music. That’s why Universal Music Group has already pulled out of TikTok. That’s why the National Music Publishers’ Association has already announced it won’t renew its license with the company. So, TikTok poses “a clear and present danger” to American music, too.

The music piracy alone is reason to force the platform to operate within the reach of U.S. law. But with regard to the national security threat, it is notable that unless one is in the intelligence community, or a Member of Congress receiving a security briefing, one is left to rely upon one of those core principles that social media in general has eroded: trust. I do not endorse the whataboutist view that because TikTok is not alone in causing havoc, this legislation is unnecessary, but that argument does highlight the hazards of social media that make it difficult to convince many Americans that TikTok is a threat of any kind.

Joseph V. Amodio, writing for Tanium, states that TikTok is distinguishable from other platforms thus:

TikTok stands out in its power to manipulate: While videos from any app can go viral, TikTok’s infection ability is unique, given the practice of “heating,”  where TikTok staff can supercharge distribution of hand-picked videos. This has huge implications for fair competition and free trade. Just imagine how they can siphon profits by amplifying your competitors’ posts or cooling down your own viral campaigns.

Whether the goal of data manipulation is to pull the levers on enterprise, as Amodio indicates, or to influence young voters on policy matters, how does one convince some 170 million American users that said manipulation is both occurring and should be seen as an attack? If an act of cyberwarfare entails hacking the Pentagon or shutting down part of the power grid, enough Americans can probably recognize such events as attacks in a traditional sense. Likewise, the prospect of malicious software injected into millions of mobile devices might be understood as a threat.

But what if the weapon is an insidious propaganda tool used to manipulate the opinions of millions of citizens? Who is going to be trusted to identify that as a sustained attack on the United States? Some portion of the TikTok demographic will not believe that China (or Russia) is an adversary in the first place, which is arguably evidence itself of social media’s power to influence.

Even when the delivery platform is owned by Meta and serves “ads” purchased by foreign operatives with the same objective of sowing discord, no individual wants to believe he is being manipulated. Worse, even when one tries to apply critical thinking, the effort itself is often countered by teams of data manipulators flooding the zone, creating the illusion that more “information” supports one bias or another. This was true before parties like China and Russia upped their cyber game, and before they could add artificial intelligence to the toolset.

As a practical example at the heart of the TikTok story, how does the moderate, who would rather not hyper-politicize national security, take the contemporary Republican seriously in his professed opposition to TikTok’s capacity to “manipulate” Americans? For instance, Rep. Ralph Norman of South Carolina writes, “…if you’ve spent 5 minutes exploring TikTok, you should have recognized the addictive nature of this platform. It is designed for one purpose: to control your attention. Their algorithm quickly figures out what kind of videos you’re likely to watch, and then feed you similar videos to keep you fixated.”

Fine. But one could swap “TikTok” for “Trump” and make the same general argument, including that his self-interested rhetoric about NATO, disrespect for the Constitution, etc. all constitute a threat to national security. What would Lincoln say to his legacy party about this tangled interplay between foreign and domestic forces, both hostile to American interests, and both weaponizing disinformation through addictive and manipulative platforms?

In this context, it is important to note that Trumpism is a symptom of populism—a trend that is no less prevalent on the left than on the right, perhaps especially among 18-29-year-olds. The difference, for the moment, is that the left has not found its own cult-like figure, who might also undermine core principles, albeit in a different style than Trump. The rise of populism in the U.S. and other democracies is a direct result of social media’s tendency to factionalize hearts and minds, which is precisely what a foreign adversary wants to achieve. TikTok may be a shrewdly named time bomb delivered to over half the U.S. population and, as such, should be defused. But assuming that task can be accomplished, the existential question remains as to whether we can quarantine the most virulent effects of all social platforms or “die by suicide.”

Human Voice Gaining Protection in Confronting Generative AI


Last week, Tennessee passed the ELVIS Act, expanding its statutory right of publicity (ROP) law to include voice as a protected aspect of an individual’s “likeness.” With artificial intelligence enabling ever more precise replication of specific, human-sounding voices, it is little surprise that the music powerhouse state has taken swift action to explicitly include voice among the property rights protected by its ROP statute. With the music industry contributing $9.7 billion in output to the Nashville region alone, Tennessee lawmakers took less than three months to introduce and pass the Ensuring Likeness, Voice, and Image Security (ELVIS) Act, and they could not have been luckier to have the acronym work so perfectly!

Tennessee’s existing ROP law already proscribed unlicensed use of “likeness” for a wide range of commercial purposes, and the ELVIS amendments create a civil cause of action for publication, performance, or transmission of an unauthorized “likeness,” or for making available an algorithm, software, tool, or the like with the primary purpose or function of producing one. This addition is notable because it creates potential liability for the generative AI developer whose interest may be producing the next Mary Kutter song without Mary Kutter.

Although Tennessee is not the first state to include voice in the definition of “likeness” for the purpose of ROP law, the music industry’s support indicates that the ELVIS Act is the first to directly confront the prospect of generative AI replicating artists without consent. “We applaud Tennessee’s swift and thoughtful bipartisan leadership against unconsented AI deepfakes and voice clones and look forward to additional states and the US Congress moving quickly to protect the unique humanity and individuality of all Americans,” stated Mitch Glazier, chairman and CEO of the Recording Industry Association of America.

Widening the lens to all Americans and early proposals for a federal right of publicity, the prospect of generative AI being used either to replicate a “likeness” that is not yet recognizable, or to produce synthetic “performers” that displace humans, presents two challenges not easily addressed by traditional ROP doctrines. Historically, the application of these various laws is clearest when the “likeness” of a celebrity or public figure is used for commercial advertising or endorsement. Non-famous persons, by contrast, even in states with strong ROP statutes, bear a higher burden to show reputational harm.

Thus, vesting a property right in one’s voice is a step in the right direction, but it is the various uses of a “likeness” leading to causes of action that get tricky. In its article about the ELVIS Act, Billboard cites a speech by David Israelite, president and CEO of the National Music Publishers’ Association (NMPA), stating that the much larger motion picture industry opposes a federal right of publicity. I have addressed some of the reasonable concerns motion picture producers might raise with legislation proscribing the use of generative AI for “expressive purposes,” and wherever one leans on these questions, artificial voice exemplifies the difficulty of adopting policy around generative AI in the creative industries.

As a general view, I stand with creators who see the potential for generative AI to displace human creators, and I maintain that there is nothing to be gained, culturally or economically, in a future creative sector with dramatically fewer professionals. But the ELVIS Act itself highlights the challenge of writing policy that looks beyond the current population of famous or semi-famous professionals. In this context, perhaps audiobook narrators provide some insight. I have talked to several voice actor friends and colleagues in recent months, and after explaining why copyright does not typically protect their interests, we turn to the subject of ROP, where I disappoint them further by explaining why those laws do not quite address the prospect of scraping voice recordings to train a generative AI.

Award-Winning Book Narrator Encounters Her Virtual Self?

I recently spoke to audiobook narrator Hillary Huber, who discovered that her voice may be the unauthorized source of a Virtual Voice, a service provided to self-published authors on the Kindle Direct Publishing (KDP) platform. The Virtual Voice concept uses synthetic voice technology to enable the self-published author of a modestly selling title to create an audiobook she could otherwise not afford to produce. But Virtual Voice, a feature of Amazon’s publishing platform, naturally raises two questions: first, whose voices are used to train the AI? And second, is the model a harbinger of doom for professional book narrators throughout the industry?

Huber was alerted to the possibility of her vocal doppelganger by a friend sharing links to several books on the KDP platform and telling her, “This is your voice!” But, as Huber explained to me, “Because our own voices never sound the same to ourselves as to others, I asked several colleagues to weigh in, and they were unanimous in their opinion that it was a version of me—not just the sound, but also certain markers like cadence and inflection.”

To my ear, which has not been trained on the more than 700 books Huber has narrated, the Virtual Voice sample sounds either like a mediocre computer rendering of her, or like a recording of her voice with a computerized filter distorting the sound. The latter, of course, did not occur, because Huber did not narrate the book in question; but whether Virtual Voice was “trained,” without license, on the voices of professional narrators like Huber and her colleagues is a question worth asking.

More broadly, as a matter of law and policy, the book narration business is perhaps instructive to other creators, including other voice actors, musical performers, et al. One difficulty, it seems, lies in distinguishing among the unknown, the semi-famous, and the famous, and Huber confirmed for me that the book narration world is indeed segmented into these three strata. Many unknown narrators earn modest incomes recording a broad range of modestly selling audiobooks; a small group of regulars like Huber can earn middle-class incomes reading more popular books; and, of course, celebrities are occasionally paid whatever they can negotiate to read bestsellers. Naturally, it is the narrator whose name and voice may not be widely recognizable, even among avid book listeners, who is most anxious about the prospect of losing her job to generative AI.

Additionally, when I asked Huber how many narrators are in the group I called the “recognizable regulars,” her guess was surprisingly low: well below 100 narrators. I figured the number would be small, but not that small, and this raises real concerns about the narration business. For one thing, Congress isn’t motivated to protect a handful of jobs. For another, even if the number were a few hundred voices producing a training dataset of, say, one million popular books, that seems like a comparatively light task for a generative AI developer seeking to create enough variety in synthetic voices to replace the narration workforce.

In that regard, while it may be tempting for some book narrators to license their voices for a purpose like Virtual Voice, it is hard to see how this would not very quickly obviate the need for human narrators to produce audiobooks at all, or even the need to license human voices for long. At a certain threshold, the AI is expected to self-train, suggesting that a handful of narrators might obtain licensing deals once, and then nobody ever will again.

Assuming that’s a fair summary, some might ask why Congress should consider a provision like the ELVIS Act as a starting point for a federal ROP law with an aim to protect more than today’s musical performers. In my view, the answer goes back to considering future generations of creators. If there is one consistent feature in Big Tech’s influence on the creative sector, it is that the major platforms developed thus far are highly effective at cannibalizing existing works of great value while shrinking opportunities for new creators at every level.

If the U.S. is going to continue to foster new generations of professional creators, it is necessary that policy in this area does not focus too narrowly on the current population of recognizable and famous creators. Here, although copyright law does not apply to the property rights in “likeness,” its foundational purpose to “promote progress” might serve as a guiding principle in crafting new federal laws that vest property rights in our images, names, and voices.


Photo by: Andrew282