Thaler Asks the Court to Make Copyright Policy


On October 30, counsel for Dr. Stephen Thaler requested that the U.S. Supreme Court hold its Petition for Certiorari in Thaler v. Perlmutter until after the Court rules on the matter of the dismissal of Copyright Office Director Shira Perlmutter by the White House in May. As the letter states, “The Blanche and Slaughter cases consider whether Director Perlmutter, a named party in the matter for which Dr. Thaler filed a Petition, shall continue in her position at the Copyright Office. As such, it has significant relevance for the outcome of the instant matter, particularly because her termination appears to be related to her stance on copyright for works created by artificial intelligence, which is the focus of Dr. Thaler’s case.”

Notwithstanding the Court’s obligation to decide whether the President has the authority to remove the head of the Copyright Office, there is little more than rumor and assumption that Director Perlmutter was dismissed because of her “stance” on artificial intelligence. And even if she were dismissed on that basis, it should have no bearing on whether the Court will weigh Dr. Thaler’s legal arguments, which are not in conflict with Perlmutter, but rather with the history of copyright law.

Most importantly, the human authorship doctrine, which Thaler seeks to erase, is not a philosophy unique to the views of Director Perlmutter, and the question is entirely separate from those raised in the jurisdictional matters relevant to the Blanche and Slaughter cases. The Court has ample guidance to find that the human authorship doctrine is well-founded in both the statute and the history and tradition of copyright law, and it should decide whether to grant cert on that basis.

Instead, with his request to hold cert, Dr. Thaler implies that the Court should wait to see whether a new appointee, friendly to the interests of AI developers, might replace Director Perlmutter. But even if that will be the result of the Blanche and Slaughter decisions, the Court is aware that 1) the Copyright Office, in its advisory capacity, does not make copyright law; and 2) Thaler’s argument for omitting the human authorship doctrine would have significant statutory, case law, and constitutional implications irrespective of who leads the Office.

Especially after the Court’s decision in Loper Bright Enterprises, overturning Chevron deference, it seems inconsistent to argue that the leadership of an agency, which has never been accorded Chevron deference, is in any way determinative of the foundational question presented by Dr. Thaler. In my view, the Court should deny cert on the grounds that the D.C. Circuit ruled correctly, but if it agrees to hear the case, it should not be distracted by the notion that copyright’s core principles are mere matters of one party’s opinion.

Rescuing Democracy from Democratization


Over the weekend, I had the privilege of participating in the 11th annual Mosaic Conference, organized by the Institute for Intellectual Property and Social Justice (IIPSJ) and hosted by Suffolk University Law School IP Center. Founded by Professor Lateef Mtima at Howard University, IIPSJ’s mission is to “…examine intellectual property law and policy—as well as the IP regime in total—to see where full participation of disadvantaged, excluded, and marginalized groups may need redressing.”

A number of subjects were raised that will inspire some future blogs, but in the meantime, the following contains my remarks about the folly of “democratization,” slightly edited for this format:

As Professor David Golumbia writes in his posthumously published book, Cyberlibertarianism: The Right-Wing Politics of Digital Technology, “As a rule, ‘democratization’ appears to mean tearing apart institutions, regardless of their nominal functions, including institutions whose purpose is to promote or even embody democracy.”

This is a very difficult moment to talk about knitting people and nations together when the exigent forces are so obviously centrifugal. The historian Joseph Ellis uses that word centrifugal in his book The Quartet to describe the sentiments of the newly independent American states and their reluctance to form the union, and it is hard to believe that that era, when roughly 4 million farmers barely knew the world more than 30 miles beyond their homes, might be compared to our digitally and globally interconnected present. But in my view, Big Tech’s claim to want to “democratize” everything, beginning with cultural works protected by copyright, was and remains catalytic to the struggle we now face to rescue the common cause of democracy.

In the United States, as the republican foundations that even allow room for discussions about social justice are under attack, we confront an authoritarianism that we recognize from history paired with a threat of technological feudalism that is unprecedented. At the same time that civil rights gains attained decades ago must now be reclaimed, rapid technological advancements in artificial intelligence also present new potential modes of injustice, and that challenge has many IP implications.

A simple example I have used recently begins with a friend in medical law who predicts that an AI will soon be better at reading a diagnostic scan than a human radiologist. He’s probably right, and of course, such promises, like improved healthcare, animate the political rhetoric used to promote yet another era of laissez-faire tech policy in the name of undefined “innovation.” As Jaron Lanier wrote in 2010, “People will accept ideas presented in technological form that would be abhorrent in any other form.”  I think this captures why the word innovation is allowed to sweep a million sins under a million rugs.

My friend’s medical example raises critical questions about who will own that technology in a winner-take-all market that often stifles competition, and, therefore, whether the tech will improve healthcare for more people or fewer and on what terms. Alternatively, while AI diagnostic tools might improve the quality of care for the few, will AI actuarial tools be used to deny access to the many? Of course, patent law, about which I know very little, will play a substantial role in the many questions implied by the medical example.

But in a copyright context, Silicon Valley, with the help of far too many IP academics, promoted the “democratization” of access to, and use of, cultural works via the allegedly free platforms. This egalitarian rhetoric was so appealing that even many professional creators echoed the sentiment and bought into the promise of working around traditional gatekeepers and forging more “organic” connections with fans. Today, fewer professional creators fare as well as their “pre-democratized” forerunners.

In that PR campaign funded by Silicon Valley, the making-available right and the derivative-works right in particular were portrayed as anachronistic principles exclusively serving Big Media “landlords” controlling all culture and information. And while I might join certain criticisms of Big Media, especially consolidation of the industry, the “landlord” metaphor was and still is applied even to the independent artist who might presume to enforce her copyright rights.

More broadly, the underlying hypocrisy of this rhetoric is that “landlord,” of all words, is a far more apt description for the owners of virtual real estate, where information does not flow freely but is manipulated by algorithms designed to maximize and monetize even the most toxic forms of engagement. And of course, this includes both rampant copyright infringement and legal uploads of works that have now been harvested for the purpose of training artificial intelligence.

With generative AI, Big Tech—again with the help of many in IP academia—now promotes the alleged value of “democratizing” the production of works, finally revealing democratization as the anti-humanist and, therefore, anti-democratic term that it truly is. We have several current examples in amicus briefs, academic papers, and even one court’s opinion in the Bartz case, in which parties argue that mass production of material by machines somehow fulfills the original purpose of copyright law. For those following Thaler v. Perlmutter, Dr. Thaler’s recent petition for cert at the U.S. Supreme Court argues that the Copyright Office’s affirmation of the human authorship requirement “defies the constitutional goals from which Congress was empowered to create copyright, namely, the creation and dissemination of creative works.”

This is wrongly stated, but the attempt to undermine the human authorship doctrine is, of course, consistent with Big Tech’s ideological view that individual human agency is an outdated nuisance—a bug to program around in pursuit of a grand, tech-utopian dream. Or to put it another way, the scorn for human authorship is in harmony with Mark Zuckerberg recently proclaiming that the future of companionship is one in which we have more robot friends than human ones.

Long after the dust settles on the legality of AI model training with protected works, fundamental questions of social justice in a world with generative AI will need to be addressed. In addition to many examples in which these products are already causing social harm—most acutely adverse psychological effects among children and teens—generative AI can potentially swallow, or perhaps smother, economic opportunities for diversity of expression, perhaps even accelerating the current trend of government censorship.

In that regard, I find it astounding that the copyright skeptics in academia, generally aligned with the political left, promoted democratization by portraying copyright as a tool of censorship rather than as a mode of empowerment for authors. While the free market is not a perfect answer to all challenges, the spike in sales of Art Spiegelman’s Maus after it was banned in 2022, or even the market’s response forcing the restoration of Jimmy Kimmel’s show, are, in my view, examples of why the speech right and copyright more often act in concert as a force for democratic principles.

Notably, the IP skeptics have inveighed against strong copyright rights by arguing social justice principles, as if, for instance, the right of access without copyright’s boundaries is the moral equivalent of the right to read campaign now confronting real censorship. Moreover, social justice for the artist is often omitted by that school’s overstating a purely utilitarian foundation for copyright. Not only is that perspective belied by history, but it seems to me that for an IP regime to encompass social justice values, some natural rights principles must apply.

In fact, in this light, I think it is noteworthy that rather than pursue a federal publicity right in response to AI’s potential to replicate anyone’s likeness, the NO FAKES Act currently before the U.S. Congress borrows principles from trademark, copyright, and right of publicity to create a novel IP right in one’s voice and likeness. Perhaps this moves the U.S. one step closer to some of the moral rights principles that animate copyright law in other countries.

It is no surprise that the tech industry so aggressively attacked intellectual property rights by selling the chimera of “democratization.” IP rights, at their best, foster an expansive and diverse world of competing ideas, whereas Big Tech’s interests—and the interests of authoritarians—are best served by organizing people into bunkers of competing realities. This epistemic crisis, I firmly believe, explains the wanton destruction of so many democratic institutions. And with generative AI, of course, it is easy to see how mass automation of synthetic material, posing as creative and informative works, is likely to exacerbate this problem.

Democratization is a beguiling term that no longer describes movement toward democratic forms. It exploits the language of democracy to mask an ideological contempt for democratic institutions and individual agency. It is a centrifugal force driving people, communities, and nations apart—a path to social, economic, and political anarchy, where bullies win and justice does not exist. Consequently, I would ask those in IP academia to be vigilant about the distinction between democratization and democracy and to push back on the rhetoric of the former in the hope that we can still rescue the latter.

AI Works Do Not “Compete” with Works of Authorship


Many arguments advocating the view that AI training does not conflict with copyright rights share a common fallacy, namely that AI outputs represent “competitive” works that copyright law was intended to promote. This error appears in Judge Alsup’s opinion in Bartz et al. v. Anthropic AI, in a report published by AI Progress, and in an amicus brief filed by three law professors in Thomson Reuters v. Ross Intelligence.

The competition fallacy rejects the notion of “market dilution,” which may be a novel, but not unfounded, consideration under factor four of the fair use analysis. Traditionally, the fourth factor inquiry considers whether the particular use of the work(s) in suit might potentially harm its/their market value. The question does not ordinarily weigh harm to, say, all sound recordings by virtue of having scraped all sound recordings to produce a machine that makes different sound recordings. Because the dilution principle would strongly disfavor AI developers, its proponents seek to portray the outputs as “competitive” works envisioned by copyright law.

As a threshold principle, although authors may be said to be in “perfect competition” or non-competition with one another, copyright’s purpose is not to promote competition but to promote as much diverse expression as authors may be inspired to create. Notwithstanding the use of AI as tools of human expression, it is an error to refer to AI outputs in general as “works of expression,” “works of authorship,” or any term of art that seeks to portray purely machine-made outputs as an intended consequence of copyright.

The inapt use of these terms perhaps indicates a hope that courts won’t notice the omission of the human authorship doctrine. But so long as that doctrine is affirmed (and it should be), we should only refer to AI outputs by other terms—choose the pejorative “slop” or the neutral “material” as you wish—in order to place outputs in proper context to copyright law. As argued here several times, if the material at issue is not protected by copyright on the basis that it is not made by a human, then its existence cannot be described as a “work” incentivized by copyright.

Judge Alsup’s Error in Bartz et al. v. Anthropic AI

Although the Bartz case itself is settled and will not be appealed, the reference to “competition” made by Judge Alsup will probably be litigated again in one or more of the many active AI training lawsuits. In his opinion, he wrote…

…Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act.

In addition to buying into the anthropomorphic comparison between machine learning and human education, Judge Alsup’s hypothetical “explosion of competing works” set off an explosion of criticism, including by Judge Chhabria of the same circuit, ruling in Kadrey et al. v. Meta. His response states…

…when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take.

I agree with this critique, though even here I would prefer not to see the word “competing.” Competition is generally creative, whereas market dilution is generally destructive and closer to describing GAI’s effect on works of authorship and on copyright law. In fact, Judge Chhabria opines in Kadrey that, “As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service.” This kind of signal that the market dilution theory has legal foundation is why I believe its critics rely on the competition fallacy.

The Report by AI Progress

The report titled AI Models: Addressing Misconceptions About Training and Copyright, written by Anna Chauvet and Karthik Kumar, PhD, engages in the competition fallacy, albeit in a context I tend to find baffling. I say this because the report first presents an in-depth technical argument as to why AI training does not entail infringing conduct but then devotes equal effort arguing that model training is fair use.

If this document were a legal response in court, not presenting a fair use defense would likely be malpractice, but as an experts’ report, the fair use discussion casts doubt on the scientific rationale for non-infringement. Where there is truly no basis for infringement, there is no reason to mention fair use. Yet, in rejecting a consideration of market dilution under factor four, the authors of the report reprise the competition fallacy thus:

If a new work does not use protected expression, it does not matter whether it competes in the same genre and market as prior works. An increase in competitive creative works is precisely the growth of creative expression that the Copyright Act was intended to promote.

Notably, the authors rely on traditional fourth factor jurisprudence in the first sentence but seek to foreclose any consideration of AI’s novelty by mischaracterizing its outputs in the second sentence. The authors err by referring to the mass outputs of a GAI as “creative works” at all, let alone as the type of works intended to be promoted by the Copyright Act. As stated in an earlier post, I believe the courts should recognize that GAI lacks any technological precedent and, therefore, should not hesitate to plow new ground in considering market dilution as a destructive consequence worthy of deep consideration.

Further, it is concerning when any party implies that the AI outputs do not matter in considering whether the training process is fair use. This is nonsensical and inconsistent with case law. The courts absolutely consider the specific utility of technologies that potentially infringe copyright rights, and it is impossible to weigh the purpose or market effect of an AI product without considering its outputs. After all, the outputs are its purpose.

The Professors’ Brief in Thomson Reuters v. Ross

Law professors Brian L. Frye, Jess Miers, and Mateusz Blaszczyk filed a brief in Thomson Reuters v. Ross, principally to argue that the headnotes copied from Westlaw are not properly subjects of copyright. Here, I will set that question aside, and frankly, whether the courts find the headnotes to be sufficiently original for protection is not particularly relevant to the challenges posed by AI.

In the latter part of the brief, though, the professors reprise the competition fallacy, stating, “The problem with the dilution theory is that producing similar, but noninfringing works is precisely the kind of competition copyright is supposed to promote.” Again, this statement is legally correct but factually misleading. If the professors want to argue, as they do, that the Federal Trade Commission et al. err by advancing a market dilution theory based on unfair competition law, perhaps that debate is worth having. But general statements that AI outputs, as non-works of authorship, inherently fulfill the intent of copyright law are flatly wrong. The brief continues…

The Act seeks to promote the creation of original works of authorship, not to protect authors against competition. Indeed, it is axiomatic that the purpose of copyright is to benefit the public by encouraging marginal authors to produce and distribute additional works of authorship.

Copyright does not protect authors against informal competition with one another, but as stated, that has nothing to do with “competing” with machines that output non-works by non-authors. As for the reference to marginal authors, this is both misstated and misguided. First, the Copyright Act is agnostic as to which authors become popular and which ones remain “marginal.” Second, as is always the case, it is the independent authors who are more likely to be marginalized into oblivion by unregulated, unethical, and unlicensed AI products.

There are several briefs filed in Thomson Reuters by many familiar names in anti-copyright circles, and no doubt, they all repeat some variation on the competition fallacy. But copyright law exists to incentivize human beings to devote time, talent, and energy to the production and distribution of creative and informative works. Copyright does not exist to mass-produce material, content, slop, or stuff by any other name that lacks creative expression by humans.

Mistakenly portraying the outputs of GAI as generally “competitive” with works of authorship produces a cascade of doctrinal errors that swirl in eddies of circular logic around the pillar of the fourth fair use factor. The courts should decline to be dragged into that vortex and, as Judge Chhabria at least implied, they should be willing to consider the diluted streams of creativity that can result from wanton use of AI.


Photo by Fizkes