Before Generative AI, Big Tech Taught Artists to Abdicate Copyright Rights

One of the more challenging aspects of copyright advocacy is the fact that many artists and creators are conflicted about enforcing their own rights, and from observation, the disconnect is ideological. For the last 30 years, copyright skepticism has been woven into political narratives rooted in criticism of corporations and the excesses of capitalism—popular themes among the political left, which encompasses most artists. Now that generative AI developers are turning creative works into “pink slime,” and artists are suddenly more interested in their rights, it might help to recognize that the industry deploying AI is the same one that taught creators to advocate against copyright in the first place.

The year 2011 was an extraordinary time to jump into the fray. It was immediately apparent that allegations of “copyright maximalism” were deeply intertwined with a sincere and animated belief that the internet would foster a new and potent form of direct democracy to confront a litany of injustices. Copyright enforcement was characterized as a barrier to that promise, and so, the Stop SOPA campaign (to kill anti-piracy legislation) became part of a larger, frenetic collage that included OWS protests, European pirate parties, Anonymous, Wikileaks, etc., all feeding an atmosphere of revolution that corresponded with headlines and memes claiming that “Hollywood” wanted to use copyright to break the internet and stifle speech.

But Big Tech’s promise to democratize everything was a Trojan Horse from which the AI bots have now emerged to ransack the village. Not only did promoters of the “free flow of information” elide the fact that their platforms were as likely to produce the January 6th insurrection as the “Pussy Hat” March, but the allegation that copyright was a barrier to information flow had nothing to do with liberating our speech and everything to do with limiting their liability.

Every time members of the creative community echoed the anti-copyright messages pumped out by Fight for the Future, the EFF, Public Knowledge, or the platforms themselves, what was really being advocated was a lack of accountability for online service providers. I never fully understood how one of the most exploitative industries in history managed to turn anti-corporatist sentiment to its advantage, but I assumed it was the gestalt of the internet. The illusion that social platforms belong to the people was a charade that enabled Google, Facebook, et al. to camouflage their interests as our rights.

That theme has aged about as well as the tobacco industry’s efforts to sell freedom to get smokers to ignore cancer, but it’s been almost two years since Big Tech’s “Big Tobacco moment,” and little has changed. Neither in Congress nor in the courts have online service providers been held accountable for much of anything—and that’s with laws on the books. When we consider that, for almost three decades, the major platforms have acted in bad faith with their end of the DMCA bargain, and the courts have interpreted Section 230 as an unlimited liability shield, it is hard to feel hopeful about a legal framework for accountability for harms resulting from AI.

In fact, certain AI tools (e.g., LLMs) may interpose a wider “neutral” buffer between potentially harmed parties and potentially liable parties. “Knowledge” and “intent” are key factors in establishing liability, and we have watched Big Tech play a shell game with the concept of what they can “know” or “intentionally” control about activity on their platforms. AI tools could take these shenanigans to the next level, enabling new forms of harm with an even weaker nexus linking the machines to the people who design and operate them.

In the copyright world, platform operators have consistently circumvented their obligations under the DMCA with shrugging statements like “We can’t police the internet,” alluding to staggering volume while conjuring an association with authoritarianism. Now, the circumstances are different. It is a near certainty that every creative work made has been, or will be, ingested into one or more AI training models, and unless the courts find this to be an act of mass piracy and order disgorgement of the datasets, creators may have to accept that their work is being turned into pink slime.

While it is encouraging to see artists take a more active interest in copyright rights as a response to AI, it is also a bittersweet transition in light of all that has happened so far. Whatever comes next, I hope the creative community will recognize that copyright rights are the closest thing to labor rights the independent artist has. And these rights should not be weakened or abandoned for the sake of more billionaires making false promises about democracy and free speech.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
