Maybe Now, Copyright Critics Know What Censorship Looks Like


Twelve years ago, when I first engaged in copyright advocacy, I was surprised to discover how many critics argued that copyright rights conflict with the speech right. Initially, I thought this had to be a fringe, internet thing—a vibe cooked up in the adolescent blogosphere that no legal scholar or expert took seriously. It would seem obviously contradictory to believe that any creative professional opposes the speech right. But no. It became clear that the main theme underlying the anti-copyright agenda—from academia to “digital rights” organizations to Techdirt et al.—was the premise that copyright rights are a means of censorship that should be minimally tolerated, if they are tolerated at all.

To support this view, and especially with regard to enforcing copyright rights online, it was apparently necessary to vilify creators as elitist, greedy, lazy, and even untalented individuals who expected society to pay for their “hobby.” Artists are used to this kind of criticism, historically from ultra-conservative voices, but the allegedly “democratizing” promise of the internet convinced many traditional liberals, and liberal organizations, to parrot this same anti-creator rhetoric.

Those familiar pejoratives are being recycled today by AI developers claiming that their products are just too damn important to let elitist, greedy, lazy creators stand in the way of machine learning. But let’s pause the AI skirmish for a moment and back up. Because we should not lose sight of the fact that the original premise—that copyright rights conflict with speech—was 1) bullshit; and 2) dangerous bullshit.

I lost count of how many posts, blogs, articles, and academic papers I read and/or rebutted trying to claim that copyright enforcement was making information, criticism, or important new expression disappear. None of those claims have been borne out by evidence, but more insidious was the fact that those who advocated the copyright-is-censorship theme were obscuring what real censorship looks like and, worse, feeding the very mechanisms by which true censors might come to power.

And come to power they have. As the Trump administration and like-minded state officials attack a wide spectrum of both creative and informative speech, will the anti-copyright crowd acknowledge how ridiculous their claims were that authors and publishers were ever the censors? No they will not. Will they acknowledge that the rights of authors are among the constitutional rights being trampled in Trump’s stampede toward national illiteracy? No they will not. Because it ain’t the authors and publishers trying to “memory hole” history. And it was ridiculous to suggest that they ever were.

But worse than the absurd premise that creators’ rights were a meaningful tool of censorship is that the anti-copyright narrative was promoted with substantial funding by the same companies whose technologies were destined to be exploited by the civil rights-infringing kakistocracy that now holds power. This was not just foreseeable; it was almost inevitable. As cited in my last post about the book Careless People, Sarah Wynn-Williams’s description of various authoritarians, including Trump, using the Facebook algorithm to micro-target disinformation is as unsurprising as it is shocking. What the hell did anyone imagine was really financing these “free information” machines? Goofy memes and mash-up videos?

Every time Mark Zuckerberg rebutted the idea of content moderation by saying, “We don’t want to be the arbiters of speech,” he was masking the truth that Facebook would take anybody’s money and guide them to effectively aim any misinformation at any parties for any purpose. It didn’t matter if the narrative was Brexit, the CCP spying on its own citizens, rallying Buddhists into murderous rage in Myanmar, or amplifying every delusional, unconstitutional syllable in Trump’s slow insurrection against the United States. The mantra of yellow journalism was If it bleeds, it leads, but the mantra of social media is If it pays, it stays.

Not that the anti-copyright crowd would ever admit they had anything to do with the damage Trump is doing to the Republic, but at least they might now concede that their claims about copyright making “information disappear” were as unworthy of attention as they were unfounded in fact. As Justice Sandra Day O’Connor famously wrote in Harper & Row v. Nation Enterprises, “The Framers intended copyright itself to be the engine of free expression.” And so it has been. Meanwhile, the tech industry that opposes those rights has proven to be an engine of so many calamities the Framers dearly hoped Americans would avoid.



Where Are All the Trolls at the CCB?

A lot of world-shaking events have occurred since 2018, when the CASE Act was introduced for the purpose of creating a small-claim copyright alternative, now known as the Copyright Claims Board (CCB). After a pandemic, an attempted coup d’état, and other jaw-dropping moments, it’s easy to forget all the ululating noise produced by the Electronic Frontier Foundation, Fight for the Future, Public Knowledge, Mike Masnick, the Niskanen Center, Sen. Wyden, the Computer & Communications Industry Association, et al. to warn the public about the perils of the CCB. The loudest talking point in that cacophony was the unfounded prediction that the small-claim tribunal would be an ideal forum for copyright trolls. For example…

“The CASE Act would give copyright trolls a faster, cheaper way of coercing Internet users to fork over cash ‘settlements,’ bypassing the safeguards against abuse that federal judges have labored to create.” – EFF, April 2018

A “copyright troll” is an attorney who consistently files questionable or unmeritorious claims with the intent to extract settlements from alleged copyright infringers. In response to predictions that the CCB would be a perfect venue for trolling, I and others highlighted the many safeguards in the CASE legislation that were written specifically to anticipate and prevent abuse of the tribunal. In fact, the EFF quote above was a double lie: the CASE Act contained its own safeguards, and the federal-court safeguards the EFF invoked do not easily prevent trolling, which is why trolling happens in those venues, though not nearly so often as the anti-copyright hecklers like to claim.

CCB Safeguards Triggered for the First Time

As Jonathan Bailey describes in a recent post on his blog Plagiarism Today, the CCB has, for the first time, invoked its authority to bar an attorney from filing small claims for one year. To be clear, based on Bailey’s description, the attorney in question does not deserve the description “troll,” let alone the kind of predatory actor copyright hecklers refer to when they use that term.

Instead, this attorney triggered the safeguard provisions by filing several unmeritorious claims against Amazon, which was improperly named, and foreign resellers, which cannot be named in CCB claims. As Bailey notes, the effort is understandable because, “Many creators have complained that marketplaces like Amazon, Wish, Temu and so forth have become havens for infringement.”

My point here is not to comment upon or critique this one attorney’s intentions or errors, but to emphasize that the sanctions he activated at the CCB are the same safeguards written to prevent copyright trolls from even using the tribunal, let alone abusing it. As noted in this post, the CCB is a cost-prohibitive venue for the would-be troll due to the limited number of claims that may be filed in a single year, the potential fines for intentional abuse, and the possibility of being barred from the CCB for a year.

During the roughly two years between introduction and passage of the CASE Act, a typical response to the statutory safeguards was, “Well, we can’t trust the Copyright Office.” This familiar, dimwitted tactic is indistinguishable from those who say “We can’t trust the DOJ” in response to meritorious indictments against the former president. Meanwhile, by demonstrating that it will enforce safeguards as the law requires, the CCB belies all those scary headlines predicting that sharing memes on social media would result in a tidal wave of $30,000 fines.

The anti-CASE messaging has since evaporated into the digital ether, of course, but at moments like this, I think it’s fair to say that every time these same hecklers predict anything about copyright law, they should be ignored. I don’t mean that their views should be heavily scrutinized. I mean ignored. They lie about basic facts. They use fearmongering as a primary tactic. They claim to represent interests they do not represent. And they battle chimeras to stay relevant and raise funds. On that last point, expect to see the EFF look for an opportunity to litigate the constitutionality of the CCB—an effort that will likely fail but, as I say, will make good material to promote with a “Donate Now” button.

Before Generative AI, Big Tech Taught Artists to Abdicate Copyright Rights

One of the more challenging aspects of copyright advocacy is the fact that many artists and creators are conflicted about enforcing their own rights, and from observation, the disconnect is ideological. For the last 30 years, copyright skepticism has been woven into political narratives rooted in criticism of corporations and the excesses of capitalism—popular themes among the political left, which encompasses most artists. Now that generative AI developers are turning creative works into “pink slime,” and artists are suddenly more interested in their rights, it might help to recognize that the industry deploying AI is the same one that taught creators to advocate against copyright in the first place.

The year 2011 was an extraordinary time to jump into the fray. It was immediately apparent that allegations of “copyright maximalism” were deeply intertwined with a sincere and animated belief that the internet would foster a new and potent form of direct democracy to confront a litany of injustices. Copyright enforcement was characterized as a barrier to that promise, and so, the Stop SOPA campaign (to kill anti-piracy legislation) became part of a larger, frenetic collage that included OWS protests, European pirate parties, Anonymous, Wikileaks, etc., all feeding an atmosphere of revolution that corresponded with headlines and memes claiming that “Hollywood” wanted to use copyright to break the internet and stifle speech.

But Big Tech’s promise to democratize everything was a Trojan Horse from which the AI bots have now emerged to ransack the village. Not only did promoters of the “free flow of information” elide the fact that their platforms were as likely to produce the January 6th insurrection as the “Pussy Hat” March, but the allegation that copyright was a barrier to information flow had nothing to do with liberating our speech and everything to do with limiting their liability.

Every time members of the creative community echoed the anti-copyright messages pumped out by Fight for the Future, the EFF, Public Knowledge, or the platforms themselves, what was really being advocated was a lack of accountability for online service providers. I never fully understood how one of the most exploitative industries in history managed to turn anti-corporatist sentiment to its advantage, but I assumed it was the gestalt of the internet. The illusion that social platforms belong to the people was a charade that enabled Google, Facebook, et al. to camouflage their interests as our rights.

That theme has aged about as well as the tobacco industry’s efforts to sell freedom to get smokers to ignore cancer, but it’s been almost two years since Big Tech’s “Big Tobacco moment,” and little has changed. Neither in Congress nor the courts have online service providers been held accountable for much of anything—and that’s with laws on the books. When we consider that, for almost three decades, the major platforms have acted in bad faith with their end of the DMCA bargain, and the courts have interpreted Section 230 as an unlimited liability shield, it is hard to feel hopeful about a legal framework for accountability for harms resulting from AI.

In fact, certain AI tools (e.g., LLMs) may create a wider “neutral” buffer between potentially harmed parties and potentially liable parties. “Knowledge” and “intent” are key factors in establishing liability, and we have watched Big Tech play a shell game with the concept of what they can “know” or “intentionally” control about activity on their platforms. AI tools could take these shenanigans to the next level, enabling new forms of harm with an even weaker nexus linking the machines to the people who design and operate them.

In the copyright world, platform operators have consistently circumvented their obligations under the DMCA with shrugging statements like We can’t police the internet, alluding to staggering volume while conjuring an association with authoritarianism. Now, the circumstances are different. It is a near certainty that every creative work made has been, or will be, ingested into one or more AI training models, and unless the courts find this to be an act of mass piracy and order disgorgement of the datasets, creators may have to accept that their work is being turned into pink slime.

While it is encouraging to see artists take a more active interest in copyright rights as a response to AI, it is also a bittersweet transition in light of all that has happened so far. Whatever comes next, I hope the creative community will recognize that copyright rights are the closest thing to labor rights the independent artist has. And these rights should not be weakened or abandoned for the sake of more billionaires making false promises about democracy and free speech.