Rescuing Democracy from Democratization


Over the weekend, I had the privilege of participating in the 11th annual Mosaic Conference, organized by the Institute for Intellectual Property and Social Justice (IIPSJ) and hosted by the Suffolk University Law School IP Center. IIPSJ, founded by Professor Lateef Mtima at Howard University, pursues a mission to “…examine intellectual property law and policy—as well as the IP regime in total—to see where full participation of disadvantaged, excluded, and marginalized groups may need redressing.”

A number of subjects were raised that will inspire some future blogs, but in the meantime, the following contains my remarks about the folly of “democratization,” slightly edited for this format:

In his posthumously published book Cyberlibertarianism: The Right-Wing Politics of Digital Technology, Professor David Golumbia writes, “As a rule, ‘democratization’ appears to mean tearing apart institutions, regardless of their nominal functions, including institutions whose purpose is to promote or even embody democracy.”

This is a very difficult moment to talk about knitting people and nations together when the exigent forces are so obviously centrifugal. The historian Joseph Ellis uses that word centrifugal in his book The Quartet to describe the sentiments of the newly independent American states and their reluctance to form the union, and it is hard to believe that that era, when roughly 4 million farmers barely knew the world more than 30 miles beyond their homes, might be compared to our digitally and globally interconnected present. But in my view, Big Tech’s claim to want to “democratize” everything, beginning with cultural works protected by copyright, was and remains catalytic to the struggle we now face to rescue the common cause of democracy.

In the United States, as the republican foundations that even allow room for discussions about social justice are under attack, we confront an authoritarianism that we recognize from history paired with a threat of technological feudalism that is unprecedented. At the same time that civil rights gains won decades ago must now be reclaimed, rapid technological advancements in artificial intelligence also present new potential modes of injustice, and that challenge has many IP implications.

A simple example I have used recently begins with a friend in medical law who predicts that an AI will soon be better at reading a diagnostic scan than a human radiologist. He’s probably right, and of course, such promised benefits, like improved healthcare, animate the political rhetoric used to promote yet another era of laissez-faire tech policy in the name of undefined “innovation.” As Jaron Lanier wrote in 2010, “People will accept ideas presented in technological form that would be abhorrent in any other form.” I think this captures why the word innovation is allowed to sweep a million sins under a million rugs.

My friend’s medical example raises critical questions about who will own that technology in a winner-take-all market that often stifles competition, and, therefore, whether the tech will improve healthcare for more people or fewer and on what terms. Alternatively, while AI diagnostic tools might improve the quality of care for the few, will AI actuarial tools be used to deny access to the many? Of course, patent law, about which I know very little, will play a substantial role in the many questions implied by the medical example.

But in a copyright context, Silicon Valley, with the help of far too many IP academics, promoted the “democratization” of access to, and use of, cultural works via the allegedly free platforms. This egalitarian rhetoric was so appealing that even many professional creators echoed the sentiment and bought into the promise of working around traditional gatekeepers and forging more “organic” connections with fans. Today, fewer professional creators fare as well as their “pre-democratized” forerunners.

In that PR campaign funded by Silicon Valley, the making available rights and derivative works right in particular were portrayed as anachronistic principles exclusively serving Big Media “landlords” controlling all culture and information. And while I might join certain criticisms of Big Media, especially consolidation of the industry, the “landlord” metaphor was and still is applied even to the independent artist who might presume to enforce her copyright rights.

More broadly, the underlying hypocrisy of this rhetoric is that “landlord,” of all words, is a far more apt description for the owners of virtual real estate, where information does not flow freely but is manipulated by algorithms designed to maximize and monetize even the most toxic forms of engagement. And of course, this includes both rampant copyright infringement and legal uploads of works that have now been harvested for the purpose of training artificial intelligence.

With generative AI, Big Tech—again with the help of many in IP academia—now promotes the alleged value of “democratizing” the production of works, finally revealing democratization as the anti-humanist and, therefore, anti-democratic term that it truly is. We have several current examples in amicus briefs, academic papers, and even one court’s opinion in the Bartz case, in which parties argue that mass production of material by machines somehow fulfills the original purpose of copyright law. For those following Thaler v. Perlmutter, Dr. Thaler’s recent petition for cert at the U.S. Supreme Court argues that the Copyright Office’s affirmation of the human authorship requirement “defies the constitutional goals from which Congress was empowered to create copyright, namely, the creation and dissemination of creative works.”

This is wrongly stated, but the attempt to undermine the human authorship doctrine is, of course, consistent with Big Tech’s ideological view that individual human agency is an outdated nuisance—a bug to program around in pursuit of a grand, tech-utopian dream. Or to put it another way, the scorn for human authorship is in harmony with Mark Zuckerberg recently proclaiming that the future of companionship is one in which we have more robot friends than human ones.

Long after the dust settles on the legality of AI model training with protected works, fundamental questions of social justice in a world with generative AI will need to be addressed. In addition to many examples in which these products are already causing social harm—most acutely adverse psychological effects among children and teens—generative AI can potentially swallow, or perhaps smother, economic opportunities for diversity of expression, perhaps even accelerating the current trend of government censorship.

In that regard, I find it astounding that the copyright skeptics in academia, generally aligned with the political left, promoted democratization by portraying copyright as a tool of censorship rather than as a mode of empowerment for authors. While the free market is not a perfect answer to all challenges, the spike in sales of Art Spiegelman’s Maus after it was banned in 2022, or even the market’s response forcing the restoration of Jimmy Kimmel’s show, are, in my view, examples of why the speech right and copyright more often act in concert as a force for democratic principles.

Notably, the IP skeptics have inveighed against strong copyright rights by arguing social justice principles, as if, for instance, the right of access without copyright’s boundaries is the moral equivalent of the right to read campaign now confronting real censorship. Moreover, social justice for the artist is often omitted by that school’s overstating a purely utilitarian foundation for copyright. Not only is that perspective belied by history, but it seems to me that for an IP regime to encompass social justice values, some natural rights principles must apply.

In fact, in this light, I think it is noteworthy that rather than pursue a federal publicity right in response to AI’s potential to replicate anyone’s likeness, the NO FAKES Act currently before the U.S. Congress borrows principles from trademark, copyright, and right of publicity to create a novel IP right in one’s voice and likeness. Perhaps this moves the U.S. one step closer to some of the moral rights principles that animate copyright law in other countries.

It is no surprise that the tech industry so aggressively attacked intellectual property rights by selling the chimera of “democratization.” IP rights, at their best, foster an expansive and diverse world of competing ideas, whereas Big Tech’s interests—and the interests of authoritarians—are best served by organizing people into bunkers of competing realities. This epistemic crisis, I firmly believe, explains the wanton destruction of so many democratic institutions. And with generative AI, of course, it is easy to see how mass automation of synthetic material, posing as creative and informative works, is likely to exacerbate this problem.

Democratization is a beguiling term that no longer describes movement toward democratic forms. It exploits the language of democracy to mask an ideological contempt for democratic institutions and individual agency. It is a centrifugal force driving people, communities, and nations apart—a path to social, economic, and political anarchy, where bullies win and justice does not exist. Consequently, I would ask those in IP academia to be vigilant about the distinction between democratization and democracy and to push back on the rhetoric of the former in the hope that we can still rescue the latter.

On AI Removing Creative Constraints


A paper by Eleonora Rosati titled The future of the movie industry in the wake of generative AI: A perspective under EU and UK copyright law states the following:

…some have stressed the opportunities presented by the implementation of AI, including by advancing claims, like those made by AI video studio The Dor Brothers, that AI tools ‘are actually a purer form of expression, offering the most direct link between the artist’s brain and the end result, without the compromises required in large productions or the constraints that come with complex shoots’.

The quote by The Dor Brothers raises a question I imagine many creators ask all the time—why use generative artificial intelligence (GAI) to produce anything? The answers will vary depending on the medium of expression—from the sculptor who says “never” to the audio-visual producer who says “all the time”—because beyond the legal issues triggered by GAI, the technology reframes the question of what it means to create works of expression in the first place. And this includes the question as to whether removing “constraints” is conducive to, or harmful to, the creative process.

Although motion picture production entails more non-creative constraints (e.g., large investments and complex logistics) than all other media, I would caution that even in filmmaking, constraints are generative of creativity. In the same way that working around copyright constraints tends to produce new creative expression, this is also true of the limitations inherent to each medium. Moreover, the idea that an artist does not want to confront the constraints of her chosen medium is misguided, and the passion to confront those challenges is not a matter of mere nostalgia.

I get what the Dor Brothers are saying, of course. The AV producer can go from script to screen without any of the costly and cumbersome production work that will frustrate, if not substantially alter, the original vision. Screenplay material becomes prompts, and the GAI outputs the AV material without the need for cameras, actors, sets, etc. Still, the extent to which the outputs more “purely” represent the mental conception in the “artist’s brain” is both a question of copyrightability and artistic integrity. How much control the AV prompter has over the resulting material will determine the extent to which he owns the rights in that material, but even with extensive control, the “purity” of the expression is not necessarily preserved by the removal of constraints.

Notwithstanding many useful applications of AI, including for various aspects of artistic work, all the talk about “democratizing” creative expression (i.e., without developing skills in various crafts) reprises that question Why? for many artists. If you don’t enjoy dealing with the constraints of clay, paint, words, light, sounds, etc., then you probably don’t really like the process of creative expression. Again, that’s not just a luddite’s nostalgia. Creative expression (art) results when the unique, imperfect human confronts, learns from, and eventually masters the constraints of a chosen medium. As my friend Sandra Aistars, copyright professor and, recently, a fine art student, writes about the distinction between AI “training” and human learning:

… instead of predicting “what comes next,” artists studying masterworks are taught to unlock “how” the original artist has conveyed what is foundational to an image’s storytelling. This requires patience, humility and empathy on the part of the artist asking to learn. But it ends in developing one’s own aesthetic judgment and voice.

Aistars describes engaging with the constraints of visual artmaking by retracing the steps of masters in order to discover her own aesthetic. The process is physical, intellectual, and emotional at the same time, and most artists would ask why a creator would want to avoid engaging with the medium in this way. It is the act of confrontation and the artist’s unique mode of problem solving where the meaningful act of creating occurs for the individual.

Using GAI as a cheap or free assistant to write a boilerplate email or report makes sense, but the hyped-up marketing of these products, challenging users to push AI to “write poems or novels,” is asking people to fool themselves. You might have a brilliant idea for a premise, but if you don’t want to grapple with the constraints of writing, you’re not a novelist any more than you’re the “boyfriend” of an AI companion.

Turning back to the Dor Brothers’ comment, because motion picture production entails thousands of constraints that are not necessarily generative of creativity, their point has some merit in certain applications of the medium. Specifically, a lot of their work appears to be commercial advertising at this time, and the utilitarian nature of marketing material, combined with the attraction of low-cost, fast-turnaround production, cannot be ignored. Nevertheless, I would caution against the idea of a “pure” link between an artist’s “mental conception” and the end result by means of removing constraints.

Motion picture production still entails many constraints that are generative of creative expression. Just as Aistars chooses to wrestle with the possibilities and limitations of a particular pencil in her hand, the filmmaker has a complex set of “tools” that include the constraints of physical space, light, camera and lens characteristics, performers, writing, and time, all of which must be confronted to find the film’s unique voice. And as any film student can tell you, working around constraints has often resulted in moments considered to be works of cinematic genius.

Naturally, GAI is already used to reduce or eliminate certain drudgeries in creative production, and although this also implies reducing or eliminating various jobs, that is a separate matter from the philosophical premise to which this post responds. In general, I am skeptical that a seamless, constraint-free transition from mental conception to creative expression is desirable, even if it is achievable. Constraints define the various artistic media, and it seems more likely that expression through GAI will evolve as its own medium with its own constraints. Otherwise, if GAI’s only purpose is to synthetically displace the creative process in all media, the results will likely be as bloodless as the computers that made them.


Photo by Ponsulak

AI Works Do Not “Compete” with Works of Authorship

"compete"

Many arguments advocating the view that AI training does not conflict with copyright rights share a common fallacy, namely that AI outputs represent “competitive” works that copyright law was intended to promote. This error appears in Judge Alsup’s opinion in Bartz et al. v. Anthropic AI, in a report published by AI Progress, and in an amicus brief filed by three law professors in Thomson Reuters v. Ross Intelligence.

The competition fallacy rejects the notion of “market dilution,” which may be a novel, but not unfounded, consideration under factor four of the fair use analysis. Traditionally, the fourth factor inquiry considers whether the particular use might potentially harm the market value of the work(s) in suit. The question does not ordinarily weigh harm to, say, all sound recordings by virtue of having scraped all sound recordings to produce a machine that makes different sound recordings. Because the dilution principle would strongly disfavor AI developers, its proponents seek to portray the outputs as “competitive” works envisioned by copyright law.

As a threshold principle, although authors may be said to be in “perfect competition” or non-competition with one another, copyright’s purpose is not to promote competition but to promote as much diverse expression as authors may be inspired to create. Notwithstanding the use of AI as tools of human expression, it is an error to refer to AI outputs in general as “works of expression,” “works of authorship,” or any term of art that seeks to portray purely machine-made outputs as an intended consequence of copyright.

The inapt use of these terms perhaps indicates a hope that courts won’t notice the omission of the human authorship doctrine. But so long as that doctrine is affirmed (and it should be), we should only refer to AI outputs by other terms—choose the pejorative “slop” or the neutral “material” as you wish—in order to place outputs in proper context to copyright law. As argued here several times, if the material at issue is not protected by copyright on the basis that it is not made by a human, then its existence cannot be described as a “work” incentivized by copyright.

Judge Alsup’s Error in Bartz et al. v. Anthropic AI

Although the Bartz case itself is settled and will not be appealed, the reference to “competition” made by Judge Alsup will probably be litigated again in one or more of the many active AI training lawsuits. In his opinion, he wrote…

…Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act.

In addition to buying into the anthropomorphic comparison between machine learning and human education, Judge Alsup’s hypothetical “explosion of competing works” set off an explosion of criticism, including by Judge Chhabria of the same circuit, ruling in Kadrey et al. v. Meta. His response states…

…when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take.

I agree with this critique, though even here I would prefer not to see the word “competing.” Competition is generally creative, whereas market dilution is generally destructive and closer to describing GAI’s effect on works of authorship and on copyright law. In fact, Judge Chhabria opines in Kadrey that, “As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service.” This kind of signal that the market dilution theory has legal foundation is why I believe its critics rely on the competition fallacy.

The Report by AI Progress

The report titled AI Models: Addressing Misconceptions About Training and Copyright, written by Anna Chauvet and Karthik Kumar, PhD, engages in the competition fallacy, albeit in a context I tend to find baffling. I say this because the report first presents an in-depth technical argument as to why AI training does not entail infringing conduct but then devotes equal effort arguing that model training is fair use.

If this document were a legal response in court, not presenting a fair use defense would likely be malpractice, but as an experts’ report, the fair use discussion casts doubt on the scientific rationale for non-infringement. Where there is truly no basis for infringement, there is no reason to mention fair use. Yet, in rejecting a consideration of market dilution under factor four, the authors of the report reprise the competition fallacy thus:

If a new work does not use protected expression, it does not matter whether it competes in the same genre and market as prior works. An increase in competitive creative works is precisely the growth of creative expression that the Copyright Act was intended to promote.

Notably, the authors rely on traditional fourth factor jurisprudence in the first sentence but seek to foreclose any consideration of AI’s novelty by mischaracterizing its outputs in the second sentence. The authors err by referring to the mass outputs of a GAI as “creative works” at all, let alone as the type of works intended to be promoted by the Copyright Act. As stated in an earlier post, I believe the courts should recognize that GAI lacks any technological precedent and, therefore, should not hesitate to plow new ground in considering market dilution as a destructive consequence worthy of deep consideration.

Further, it is concerning when any party implies that the AI outputs do not matter in considering whether the training process is fair use. This is nonsensical and inconsistent with case law. The courts absolutely consider the specific utility of technologies that potentially infringe copyright rights, and it is impossible to weigh the purpose or market effect of an AI product without considering its outputs. After all, the outputs are its purpose.

The Professors’ Brief in Thomson Reuters v. Ross

Law professors Brian L. Frye, Jess Miers, and Mateusz Blaszczyk filed a brief in Thomson Reuters v. Ross, principally to argue that the headnotes copied from Westlaw are not proper subjects of copyright. Here, I will set that question aside, and frankly, whether the courts find the headnotes to be sufficiently original for protection is not particularly relevant to the challenges posed by AI.

In the latter part of the brief, though, the professors reprise the competition fallacy, stating, “The problem with the dilution theory is that producing similar, but noninfringing works is precisely the kind of competition copyright is supposed to promote.” Again, this statement is legally correct but factually misleading. If the professors want to argue, as they do, that the Federal Trade Commission et al. err by advancing a market dilution theory based on unfair competition law, perhaps that debate is worth having. But general statements that AI outputs, as non-works of authorship, inherently fulfill the intent of copyright law are flatly wrong. The brief continues…

The Act seeks to promote the creation of original works of authorship, not to protect authors against competition. Indeed, it is axiomatic that the purpose of copyright is to benefit the public by encouraging marginal authors to produce and distribute additional works of authorship.

Copyright does not protect authors against informal competition with one another, but as stated, that has nothing to do with “competing” with machines that output non-works by non-authors. As for the reference to marginal authors, this is both misstated and misguided. First, the Copyright Act is agnostic as to which authors become popular and which ones remain “marginal.” Second, as is always the case, it is the independent authors who are more likely to be marginalized into oblivion by unregulated, unethical, and unlicensed AI products.

There are several briefs filed in Thomson Reuters by many familiar names in anti-copyright circles, and no doubt, they all repeat some variation on the competition fallacy. But copyright law exists to incentivize human beings to devote time, talent, and energy to the production and distribution of creative and informative works. Copyright does not exist to mass-produce material, content, slop, or stuff by any other name that lacks creative expression by humans.

Mistakenly portraying the outputs of GAI as generally “competitive” with works of authorship produces a cascade of doctrinal errors that swirl in eddies of circular logic around the pillar of the fourth fair use factor. The courts should decline to be dragged into that vortex and, as Judge Chhabria at least implied, they should be willing to consider the diluted streams of creativity that can result from wanton use of AI.


Photo by Fizkes