On AI Removing Creative Constraints


A paper by Eleonora Rosati titled The future of the movie industry in the wake of generative AI: A perspective under EU and UK copyright law states the following:

…some have stressed the opportunities presented by the implementation of AI, including by advancing claims, like those made by AI video studio The Dor Brothers, that AI tools ‘are actually a purer form of expression, offering the most direct link between the artist’s brain and the end result, without the compromises required in large productions or the constraints that come with complex shoots’

The quote by The Dor Brothers raises a question I imagine many creators ask all the time—why use generative artificial intelligence (GAI) to produce anything? The answers will vary depending on the medium of expression—from the sculptor who says “never” to the audio-visual producer who says “all the time”—because beyond the legal issues triggered by GAI, the technology reframes the question of what it means to create works of expression in the first place. And this includes the question as to whether removing “constraints” is either conducive or harmful to the creative process.

Although motion picture production entails more non-creative constraints (e.g., large investments and complex logistics) than any other medium, I would caution that even in filmmaking, constraints are generative of creativity. In the same way that working around copyright constraints tends to produce new creative expression, the same is true of the limitations inherent to each medium. Moreover, the idea that an artist does not want to confront the constraints of her chosen medium is misguided, and the passion to confront those challenges is not a matter of mere nostalgia.

I get what the Dor Brothers are saying, of course. The AV producer can go from script to screen without any of the costly and cumbersome production work that will frustrate, if not substantially alter, the original vision. Screenplay material becomes prompts, and the GAI outputs the AV material without the need for cameras, actors, sets, etc. Still, the extent to which the outputs more “purely” represent the mental conception in the “artist’s brain” is both a question of copyrightability and artistic integrity. How much control the AV prompter has over the resulting material will determine the extent to which he owns the rights in that material, but even with extensive control, the “purity” of the expression is not necessarily preserved by the removal of constraints.

Notwithstanding many useful applications of AI, including for various aspects of artistic work, all the talk about “democratizing” creative expression (i.e., without developing skills in various crafts) reprises that question Why? for many artists. If you don’t enjoy dealing with the constraints of clay, paint, words, light, sounds, etc., then you probably don’t really like the process of creative expression. Again, that’s not just Luddite nostalgia. Creative expression (art) results when the unique, imperfect human confronts, learns from, and eventually masters the constraints of a chosen medium. As my friend Sandra Aistars, copyright professor and, recently, a fine art student, writes about the distinction between AI “training” and human learning:

… instead of predicting “what comes next,” artists studying masterworks are taught to unlock “how” the original artist has conveyed what is foundational to an image’s storytelling. This requires patience, humility and empathy on the part of the artist asking to learn. But it ends in developing one’s own aesthetic judgment and voice.

Aistars describes engaging with the constraints of visual artmaking by retracing the steps of masters in order to discover her own aesthetic. The process is physical, intellectual, and emotional at the same time, and most artists would ask why a creator would want to avoid engaging with the medium in this way. It is the act of confrontation and the artist’s unique mode of problem solving where the meaningful act of creating occurs for the individual.

Using GAI as a cheap or free assistant to write a boilerplate email or report makes sense, but the hyped-up marketing of these products, challenging users to push AI to “write poems or novels” is asking people to fool themselves. You might have a brilliant idea for a premise, but if you don’t want to grapple with the constraints of writing, you’re not a novelist any more than you’re the “boyfriend” of an AI companion.

Turning back to the Dor Bros.’ comment, because motion picture production entails thousands of constraints that are not necessarily generative of creativity, their point has some merit in certain applications of the medium. Specifically, a lot of their work appears to be commercial advertising at this time, and the utilitarian nature of marketing material, combined with the attraction of low-cost, fast-turnaround production cannot be ignored. Nevertheless, I would caution against the idea of a “pure” link between an artist’s “mental conception” and the end result by means of removing constraints.

Motion picture production still entails many constraints that are generative of creative expression. Just as Aistars chooses to wrestle with the possibilities and limitations of a particular pencil in her hand, the filmmaker has a complex set of “tools” that includes the constraints of physical space, light, camera and lens characteristics, performers, writing, and time, all of which must be confronted to find the film’s unique voice. And as any film student can tell you, working around constraints has often resulted in moments considered to be works of cinematic genius.

Naturally, GAI is already used to reduce or eliminate certain drudgeries in creative production, and although this also implies reducing or eliminating various jobs, that is a separate matter from the philosophical premise to which this post responds. In general, I am skeptical that a seamless, constraint-free transition from mental conception to creative expression is desirable, even if it is achievable. Constraints define the various artistic media, and it seems more likely that expression through GAI will evolve as its own medium with its own constraints. Otherwise, if GAI’s only purpose is to synthetically displace the creative process in all media, the results will likely be as bloodless as the computers that made them.


Photo by: Ponsulak

AI Works Do Not “Compete” with Works of Authorship


Many arguments advocating the view that AI training does not conflict with copyright rights share a common fallacy, namely that AI outputs represent “competitive” works that copyright law was intended to promote. This error appears in Judge Alsup’s opinion in Bartz et al. v. Anthropic AI, in a report published by AI Progress, and in an amicus brief filed by three law professors in Thomson Reuters v. Ross Intelligence.

The competition fallacy rejects the notion of “market dilution,” which may be a novel, but not unfounded, consideration under factor four of the fair use analysis. Traditionally, the fourth-factor inquiry considers whether the particular use of the work(s) in suit might potentially harm the market value of those works. The question does not ordinarily weigh harm to, say, all sound recordings by virtue of having scraped all sound recordings to produce a machine that makes different sound recordings. Because the dilution principle would strongly disfavor AI developers, its proponents seek to portray the outputs as “competitive” works envisioned by copyright law.

As a threshold principle, although authors may be said to be in “perfect competition” or non-competition with one another, copyright’s purpose is not to promote competition but to promote as much diverse expression as authors may be inspired to create. Notwithstanding the use of AI as a tool of human expression, it is an error to refer to AI outputs in general as “works of expression,” “works of authorship,” or any term of art that seeks to portray purely machine-made outputs as an intended consequence of copyright.

The inapt use of these terms perhaps indicates a hope that courts won’t notice the omission of the human authorship doctrine. But so long as that doctrine is affirmed (and it should be), we should only refer to AI outputs by other terms—choose the pejorative “slop” or the neutral “material” as you wish—in order to place outputs in proper context to copyright law. As argued here several times, if the material at issue is not protected by copyright on the basis that it is not made by a human, then its existence cannot be described as a “work” incentivized by copyright.

Judge Alsup’s Error in Bartz et al. v. Anthropic AI

Although the Bartz case itself is settled and will not be appealed, the reference to “competition” made by Judge Alsup will probably be litigated again in one or more of the many active AI training lawsuits. In his opinion, he wrote…

…Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act.

In addition to buying into the anthropomorphic comparison between machine learning and human education, Judge Alsup’s hypothetical “explosion of competing works” set off an explosion of criticism, including by Judge Chhabria of the same circuit, ruling in Kadrey et al. v. Meta. His response states…

…when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take.

I agree with this critique, though even here I would prefer not to see the word “competing.” Competition is generally creative whereas market dilution is generally destructive and closer to describing GAI’s effect on works of authorship and on copyright law. In fact, Judge Chhabria opines in Kadrey that, “As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service.” This kind of signal that the market dilution theory has legal foundation is why I believe its critics rely on the competition fallacy.

The Report by AI Progress

The report titled AI Models: Addressing Misconceptions About Training and Copyright, written by Anna Chauvet and Karthik Kumar, PhD, engages in the competition fallacy, albeit in a context I tend to find baffling. I say this because the report first presents an in-depth technical argument as to why AI training does not entail infringing conduct but then devotes equal effort arguing that model training is fair use.

If this document were a legal response in court, not presenting a fair use defense would likely be malpractice, but as an experts’ report, the fair use discussion casts doubt on the scientific rationale for non-infringement. Where there is truly no basis for infringement, there is no reason to mention fair use. Yet, in rejecting a consideration of market dilution under factor four, the authors of the report reprise the competition fallacy thus:

If a new work does not use protected expression, it does not matter whether it competes in the same genre and market as prior works. An increase in competitive creative works is precisely the growth of creative expression that the Copyright Act was intended to promote.

Notably, the authors rely on traditional fourth-factor jurisprudence in the first sentence but seek to foreclose any consideration of AI’s novelty by mischaracterizing its outputs in the second sentence. The authors err by referring to the mass outputs of a GAI as “creative works” at all, let alone as the type of works intended to be promoted by the Copyright Act. As stated in an earlier post, I believe the courts should recognize that GAI lacks any technological precedent and, therefore, should not hesitate to plow new ground in considering market dilution as a destructive consequence worthy of deep consideration.

Further, it is concerning when any party implies that the AI outputs do not matter in considering whether the training process is fair use. This is nonsensical and inconsistent with case law. The courts absolutely consider the specific utility of technologies that potentially infringe copyright rights, and it is impossible to weigh the purpose or market effect of an AI product without considering its outputs. After all, the outputs are its purpose.

The Professors’ Brief in Thomson Reuters v. Ross

Law professors Brian L. Frye, Jess Miers, and Mateusz Blaszczyk filed a brief in Thomson Reuters v. Ross, principally to argue that the headnotes copied from Westlaw are not proper subjects of copyright. Here, I will set that question aside, and frankly, whether the courts find the headnotes to be sufficiently original for protection is not particularly relevant to the challenges posed by AI.

In the latter part of the brief, though, the professors reprise the competition fallacy, stating, “The problem with the dilution theory is that producing similar, but noninfringing works is precisely the kind of competition copyright is supposed to promote.” Again, this statement is legally correct but factually misleading. If the professors want to argue, as they do, that the Federal Trade Commission et al. err by advancing a market dilution theory based on unfair competition law, perhaps that debate is worth having. But general statements that AI outputs, as non-works of authorship, inherently fulfill the intent of copyright law are flatly wrong. The brief continues…

The Act seeks to promote the creation of original works of authorship, not to protect authors against competition. Indeed, it is axiomatic that the purpose of copyright is to benefit the public by encouraging marginal authors to produce and distribute additional works of authorship.

Copyright does not protect authors against informal competition with one another, but as stated, that has nothing to do with “competing” with machines that output non-works by non-authors. As for the reference to marginal authors, this is both misstated and misguided. First, the Copyright Act is agnostic as to which authors become popular and which ones remain “marginal.” Second, as is always the case, it is the independent authors who are more likely to be marginalized into oblivion by unregulated, unethical, and unlicensed AI products.

There are several briefs filed in Thomson Reuters by many familiar names in anti-copyright circles, and no doubt, they all repeat some variation on the competition fallacy. But copyright law exists to incentivize human beings to devote time, talent, and energy to the production and distribution of creative and informative works. Copyright does not exist to mass-produce material, content, slop, or stuff by any other name that lacks creative expression by humans.

Mistakenly portraying the outputs of GAI as generally “competitive” with works of authorship produces a cascade of doctrinal errors that swirl in eddies of circular logic around the pillar of the fourth fair use factor. The courts should decline to be dragged into that vortex and, as Judge Chhabria at least implied, they should be willing to consider the diluted streams of creativity that can result from wanton use of AI.


Photo by Fizkes

Public Knowledge Post on AI & Fair Use Misses the Mark


Patrick Gallaher at Public Knowledge recently posted an article about AI training with protected works, proposing to distinguish between piracy and fair use. Not to begin on a pedantic note, but the article is subtitled “Words Matter” because it claims that piracy is a provocative, non-legal term, so I have to respond by saying this is wrong. Although we think of “piracy” today as enterprises like The Pirate Bay, courts have long used the term “piracy” to mean “copyright infringement.” For instance, the seminal fair use case Folsom v. Marsh (1841) uses the word thirteen times, as in this quote:

“…it is as clear, that if he thus cites the most important parts of the work, with a view, not to criticise, but to supersede the use of the original work, and substitute the review for it, such a use will be deemed in law a piracy.”

So, Gallaher is making a semantic fuss over nothing. If a contemporary court holds that AI training with protected works is copyright infringement, then this conduct may both legally and colloquially be called piracy.

As to the substance of the post, Gallaher asserts that AI training is inherently fair use, which is too broad a claim. The fair use doctrine defies generalization, and the facts in one case involving a particular AI and one type of work may have limited influence on the result of a case involving a different AI and different type of work. Or to put that another way, the incomplete fair use inquiry conducted in Bartz v. Anthropic, involving a class of literary works, likely predicts almost nothing about the eventual outcome in UMG et al. v. Udio or Disney et al. v. Midjourney, involving sound recordings and visual works respectively.

Gallaher states that AI training is transformative under fair use factor one (the purpose of the use). Indeed, many articles of this nature rely on the assumption that this finding should be obvious and should carry the weight of the fair use analysis. “Copying for training is transformative: it uses the works for a fundamentally different purpose from the original, much like indexing websites for search engines or scanning books for text analysis,” he writes. And that’s all he writes about one of the most vexing doctrines in fair use as applied to the most challenging technology ever confronted by copyright law.

Of course, even in one sentence, Gallaher manages to hide (or expose) the distinction that the purpose of many GAI products is to produce works without authors. This fact is highly distinguishable from the two analogies he cites and, as the courts will surely recognize, presents a novel challenge to the constitutional intent of copyright law. This fallacy runs through every article of this nature—the claim that AI is the most revolutionary technology in history, yet that, despite this novelty, we have ample case law to conclude that training is fair use.

Perhaps the courts will not wholly agree with my view that a purpose which does not serve the goals of copyright cannot favor fair use, but in Kadrey v. Meta, Judge Chhabria stated, “Courts can’t stick their heads in the sand to an obvious way that a new technology might severely harm the incentive to create, just because the issue has not come up before.”

Although that sentence prefaces a consideration of market dilution under factor four, the words “harm the incentive to create” allude directly to copyright’s core purpose and, so, implicate the purpose of the GAI to “create” in lieu of authors. And that goes to the question of transformativeness. So, no, it is not enough to say that a use which serves a different purpose is per se transformative, especially when that different purpose is to do exactly what creators do and, in the process, moot the utility of copyright law.

Notably, Gallaher masks the substitutional purpose of GAI by referring to it in general as technology that serves a “public good” and which provides “broad benefits.” The plain fact, though, is that we do not know this to be true. Simply because a product is new, being widely adopted, and/or has investors chomping at the bit is not evidence that its purpose is categorically beneficial. Far from it. We are already flooded with AI products causing serious harm, triggering liability claims for negligence and wrongful death, and launching emotional Senate hearings.

In this regard, I have argued that the courts have no factual basis for even defining the purpose of AI training. Although we should not talk about AI as a monolith, the counterpoint to that principle is that it’s generally the same process ingesting the same creative works, whether the AI product is used for scientific research, military applications, medical diagnosis, CSAM, social engineering attacks, or addicting children to establish dangerous “friendships” with machines.

Even if the courts are unwilling to apply such a broad sweep of uncertainty in a copyright context, it is sufficient to say that we have little reason to assume that AI is generally beneficial in the world of creative and cultural production. And whether or not the folks at Public Knowledge know it, the courts are at liberty to look beyond the four factors in weighing fair use, especially when they are presented with considerations that have little or no precedent.

It is important to keep in mind that on fair use factor one, the often unwieldy transformative doctrine splits into two distinct branches of case law. The traditional purpose of fair use, dating back to English courts, is to allow new creative expression to flourish, particularly expression that comments upon the work being used. Fair use cases of this nature most often address one user of one work for one clear purpose.

The more contemporary branch of factor one considerations entails mass use of protected works for a technological purpose that can strain against the fair use doctrine. Simply put, fair use was not developed or codified into statute to provide raw materials for technological products, and as discussed in other posts, when the Second Circuit allowed scanning millions of books for Google Books, it stated that the case “tests the boundaries of fair use.” GAI products, whether used for good or ill, lie well outside those boundaries.

Articles like Gallaher’s are not really making a copyright argument but are instead drawing readers to conclude that copyright owners should be required to subsidize AI development whether they like it or not. Short of assuming that Public Knowledge is still a PR firm for Big Tech, I don’t know why an organization with that name takes such a position when countless parents, educators, artists, lawmakers, and medical experts are insisting upon guardrails and oversight for AI in recognition of social harm already being done. The same sober approach must apply to copyright rights and, at the very least, foster a licensing regime that avoids undermining foundational IP principles.


Image source: H9images