Are Creators Aligned on Artificial Intelligence?


One of many challenges with adoption of generative AI (GAI) tools is whether creators are willing to demonstrate a degree of solidarity on the matter—i.e., apply the principle we generally call fair trade. If Creator A uses a GAI that might be harmful to Creator B in a different field, will most creators take this broader perspective in a group effort to demand ethical uses of GAI? Moreover, this question becomes intertwined with copyright because the use of GAI is a subject of evolving legal doctrine, meaning that creators who want to produce commercial content outside their core talents should be aware that the material produced may not be protectable under the law.

Two simple examples would be the self-published book author who might use an AI voice app to produce an audiobook, and the documentary filmmaker who might use an AI music generator to produce a soundtrack for a film. In both examples, creators in other fields—voice actors and composers respectively—are potentially harmed by the development and use of these AI tools. But 1) will the author and filmmaker take that consideration into account, and 2) will the sound recordings in either case be protected by copyright?

In the case of the author using AI in lieu of hiring a narrator to produce the audiobook, I predict that under current doctrine, the sound recording would not be protected by copyright law because there is no human performance captured in that recording. Thus, remedies for any piracy of the audiobook would rely solely on the protection of the underlying literary work, which is effective—but if the sound recording is also protected and registered, that would be two works infringed instead of one.

This increases the potential damages for infringement, which puts the author/owner in a stronger position if she needs to take legal action. By this example, authors’ interests may be seen as aligned with those of professional book narrators. Hiring a narrator will not only achieve better quality in the reading, but capturing the human performance is also a basis for copyright attaching to the sound recording.

Similar considerations would apply to the filmmaker with the GAI soundtrack, although there may be other factors that provide the AI music with some protection we don’t find with the AI audiobook. One factor that may become relevant is whether the filmmaker can show that he exerted sufficient creative control over the final sounds. If so, he may be able to defend a claim of copyright in the soundtrack, but we are likely several years and a few lawsuits away from clear guidance on this question.

Another consideration with the soundtrack may be the Copyright Office’s current view that material using assistive AI “within a larger work” is protected. Creators should be careful about interpreting that broad language because constituent works that stand alone—and this would apply to a soundtrack for a film—would logically not be independently protected.

Of course, there are many GAI products that allow one type of creator to avoid hiring another type of creator for a given project. Some of this is inevitable, and it is not necessarily unethical or bad for creative culture. That said, even with ethically trained and ethically used AI tools, the copyright considerations should be weighed by the individual creator (i.e., do they care about protecting what might not be protectable?), but also collectively by all creators contributing to a new ecosystem.

Since 1978, the default in the U.S. has been automatic copyright protection, even if most rights are never enforced. But as GAI is used to produce a lot of material that is not protected, it is hard to predict what effect this might have on copyright overall. The human authorship principle, which is even older than the automatic protection of the 1976 Act, fosters a new tension for creators who may wish to combine GAI and human-authored work. As a response to that tension, it would be a mistake in my view to overwrite the “human spark” doctrine and simply protect any material that “walks and talks” like a creative work. This isn’t just an emotional appeal to anthropocentrism but rather a conviction that copyright would become meaningless—even unconstitutional—by eroding the incentive rationale for its existence.

Regardless of the theoretical questions addressed in this post, I believe that as a practical matter, creators should think carefully about how and when to use GAI for various projects. As an ethical consideration, perhaps if you’re opposed to “scraping” in your industry, then opposing it in others is the right view to take. But as a business consideration, if what you’re making is meant to have commercial value, AI-generated might mean not protected by copyright—and that means even if you spend money and time on it, it isn’t yours.

Reversal in Thomson Reuters Case May Bode Well For Copyright Owners Against AI


It has already caught the attention of most copyright watchers that Judge Bibas of the District Court for the District of Delaware (3rd Circuit) reversed his own 2023 summary judgment ruling in the copyright AI case Thomson Reuters v. Ross Intelligence. Thomson, which owns the legal research database Westlaw, sued Ross for copyright infringement after the latter built its competitive AI-powered search tool by copying over 2,000 headnotes from Westlaw. Headnotes contain summaries that the court finds are sufficiently original for copyright protection, and it also finds that the material is protected under the doctrine of “selection and arrangement.”

Judge Bibas found copyright infringement of the headnotes and held that Ross’s defenses, including fair use, all failed. It is the fair use ruling that may be predictive of outcomes in other cases alleging copyright infringement for the purpose of AI training. Notably, Judge Bibas held that fair use factors one and four favored Thomson, and that Thomson prevails overall on fair use. To review, my amended summaries of the fair use factors are:

  • The purpose of the use, including whether the use is commercial.
  • The nature of the work used (i.e., whether it is more factual or creative).
  • The amount of the work used, including whether the “heart” of the work was used.
  • The potential market harm to the work used, namely whether the use substitutes for a use that the copyright owner retains the exclusive right to exploit in the market.

In Thomson, it is compelling that the court finds factors one and four go to plaintiff and that these carry the fair use finding overall when factors two and three go to defendant Ross. I say this because in other AI cases involving ingestion of entire visual, musical, and literary works, factors two and three will surely go to plaintiffs, and the AI developers can only hang their hopes on factors one and four.

Under factor one, Judge Bibas held that Ross’s use was clearly commercial and that the use serves essentially the same purpose as the works used. Here, the opinion uses language that could benefit other AI developers, but not necessarily. It states:

Ross was using Thomson Reuters’s headnotes as AI data to create a legal research tool to compete with Westlaw. It is undisputed that Ross’s AI is not generative AI (AI that writes new content itself). Rather, when a user enters a legal question, Ross spits back relevant judicial opinions that have already been written.  

On the one hand, that parenthetical note that Ross is “not generative” could be cited to argue that generative AIs like Midjourney or Udio favor a finding of transformativeness under factor one. But several of the strongest cases against the developers present similar evidence of “spitting back” copies of the material ingested. Further, as emphasized in the lawsuits against Udio and Suno, two AIs built on ingesting protected sound recordings, plaintiffs also present a strong argument that the GAIs serve the same purpose as the works used and, therefore, the purpose is not transformative.

Where a court finds under factor one that an infringing use serves the “same purpose” as the work used, this will often, quite logically, lead to finding market substitution under factor four. Here, Judge Bibas is forthright in his reversal about his initial instinct to leave factor four as a question of fact to be decided by the jury. Most notably, in my view, he writes…

I worried whether there was a relevant, genuine issue of material fact about whether Thomson Reuters would use its data to train AI tools or sell its headnotes as training data. And I thought a jury ought to sort out “whether the public’s interest is better served by protecting a creator or a copier.”

Those first considerations from 2023 reprise two familiar arguments presented in fair use defenses, but which courts have generally found unpersuasive in recent high-profile cases. The argument that the plaintiff is not yet in the market being pursued by the defendant has been rejected because it fails to properly consider the “potential” market for the protected works. Next, the “public interest” (i.e., for innovation’s sake) argument has been held too broad in major fair use cases—except Google v. Oracle, which is an outlier for several reasons. Thus, in reversing his thinking, Judge Bibas writes…

Even taking all facts in favor of Ross, it meant to compete with Westlaw by developing a market substitute. And it does not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough. Ross bears the burden of proof. It has not put forward enough facts to show that these markets do not exist and would not be affected.

Because factor two is generally considered the least important and factor four has long been considered the most important, Judge Bibas rests on that precedent to find that fair use overall favors Thomson. What this decision could signal for many AI developers who have copied millions of creative works to train their models is that generalized “innovation is important for society” arguments will find slippery footing when they argue fair use.

The Human Condition is Inherent to Copyright Law


Last week, oral arguments were presented before the D.C. Circuit Court of Appeals on the question of whether copyright protection is conditioned on human authorship. Dr. Stephen Thaler, developer of a Gen AI he calls “Creativity Machine,” submitted a visual work made entirely by that machine to the U.S. Copyright Office for registration in 2022. He disclosed the fact that the image was solely produced by GAI, and the Copyright Office rejected his claim on the basis that copyright only protects works created by humans.

Thaler contested the USCO rejection, and when the Office stuck to its guns, he filed suit (see Thaler v. Perlmutter) claiming that the “human authorship” doctrine is an invented regulation because it is not stated anywhere in the Copyright Act. He further argues that the work made for hire (WMFH) doctrine, which is part of copyright law, should be read to find that the owner of a GAI may claim copyright in the outputs of that system. Failing that, he presents theories analogizing copyright rights to laws governing the ownership, transfer, or sale of other forms of property. Thaler lost in the district court and then appealed to the D.C. Circuit.

Never read too much into oral arguments, but the panel did not sound very impressed with the theories presented by Dr. Thaler’s counsel. Setting aside those tea leaves and the still-evolving doctrine regarding works produced by a combination of human creativity and GAI, it is essential that the human authorship doctrine itself not be disturbed by any court or Congress. Doing so would render copyright (and possibly other laws) meaningless.

The Purpose of Copyright

A critical flaw in Thaler’s reasoning is its implication that copyright exists for the purpose of causing “creative” works to be manufactured by any means. This is wrong. Even if we only begin with the IP clause of the Constitution, the most basic and widely accepted purpose of copyright is incentive. “Authors” of “writings” are given exclusive rights (with certain limits) to control their works so as to incentivize the production and distribution of those works. In fact, copyright skepticism leans hard on the “devil’s bargain” view that rights should be more limited than they are. And while I have called that view cynical, I would hope those same skeptics agree that their entire critique evaporates without the human authorship doctrine.

Machines are not incentivized to create, and copyright does not encompass an incentive to the human to invent a machine that makes artificial “writings.” If anything, that would be the purpose of patent law. Moreover, Thaler’s reasoning moots the copyright rights he seeks to claim through registration. At scale, a mere handful of corporate-owned AIs autonomously generating millions of works implies a market in which few, if any, of those works have any monetary or cultural value. But as a legal matter, the rights attached to each work would likely be unenforceable because several judicial doctrines and tests would be warped by the hypothetical case in which one AI has allegedly infringed the rights of another AI.

The “Human Condition” is Not in the Copyright Act

The autonomous AI cannot produce “writings” as a matter of law because the AI is not an “author” as a matter of law. Far from an invented rule by the Copyright Office, “authors” are humans by all historical, statutory, judicial, and common-sense reasoning. As stated in my last post about this case, while it is true that the Copyright Act does not explicitly define “authors” as humans, this is also true of other statutes (e.g., labor laws) because it would be redundant to the point of absurd to imagine such laws applying to parties other than humans. When laws say “voters,” “employees,” “victims,” “perpetrators,” etc., the consistent absence of the clarification “human” is not an indication that these terms might apply to “cats,” “monkeys,” or “machines.”

The Copyright Office may be unique among agencies in explicitly stating that its specialty in law is about protecting “human authorship,” but this guidance exists because the Office recognized, long before GAI, that a registration applicant might present a work that appears “creative” but which he did not create. An example listed in the USCO Compendium is a “piece of driftwood shaped by the sea” into what might look like an aesthetic sculptural work. Hence, it is a short logical leap to analogize pleasant-looking objects shaped by nature to works output solely by a GAI like “Creativity Machine.”

Thaler’s Work Made for Hire Theory

The work made for hire (WMFH) principle is a means by which copyright rights are transferred from the author to a business entity. Because copyright rights are vested automatically in the author the moment a work is fixed in a tangible medium of expression, the author must convey in writing a transfer of those rights—even to a business she herself owns in its entirety. That transfer may be executed prior to works being created, as indeed it would be with an employment contract, but this does not alter the fact that what is transferred in advance are rights which can only vest in the human employee who possesses the agency to both create works and execute a transfer of her rights.

More broadly, although it is true that non-human entities called “corporations” are “persons” for the purpose of administering various laws, and it’s true that entities can own copyrights, the corporate fiction does not alter the fact that humans remain at the center of activity regarding various rights and liabilities. For instance, if the human managers of a company use machines to engage in criminal copyright infringement, it is the humans, not the machines, who will be sent to jail.

As a threshold matter, nothing output by the autonomous AI is a work of authorship because no rights were, or could ever be, automatically vested in the machine upon fixation of those works. There simply are no copyright rights to be transferred. The fact that a corporate entity invents and/or owns the GAI is irrelevant and is little more than a distraction as an analogy. Human employees or contractors are not owned by their employers, which brings us to another inapt comparison some have made.

Let’s Leave Slavery Out of This

During oral arguments, one of the judges asked whether the creative works of slaves were ever protected by copyrights owned by masters. It’s an analogy I’ve heard raised before, and although I do not presume to read anything into the judge’s question, the comparison is as ugly as it is unfounded. A slave is a human being robbed of all agency, and even if one could find evidence, under that ancient and barbaric practice in American law, that a slave’s “writing” was claimed by a master for copyright protection, this would say nothing about the “human authorship” question presented in Thaler. If nothing else, the hypothetical theft of creative expression from the slave by a master did not inform the WMFH doctrine in modern copyright law. Meanwhile, a GAI neither possesses agency to rob nor rights of any kind to infringe. The AI is neither slave nor employee any more than Dr. Thaler’s coffee maker.

Analogies to Other Property

Dr. Thaler argues that ownership of the GAI may be analogized to the farmer who is, of course, the first owner of the fruits of his apple orchard. Here, a court should make short work of the fact that copyright law distinguishes physical property (chattel) from copyright rights. For instance, the buyer of a painting does not necessarily purchase the copyright rights in the expression fixed in that painting. The market value of certain original works of modern art is unrelated to the fact that some of those works may not qualify for copyright protection at all. The value of a first edition book as a rare object is unrelated to the fact that the expressive work in that book may be long in the public domain. Examples abound.

Under these same principles, Dr. Thaler is absolutely permitted to print a copy of the AI-generated image called “A Recent Entrance to Paradise” and to sell that print as a physical object in any market he chooses. That print and any subsequent prints he makes comprise his physical property just like the farmer owns his apples before sale. But just as none of those apples embodies any copyrightable expression, the same is true of “A Recent Entrance to Paradise,” even if the observer sees in that image something we call “art.”

Put another way, if the skin of one mutant apple inexplicably manifests an image of the orchard where it was grown, the farmer is free to sell this marvel to the highest bidder, but he has no claim of copyright in the image itself. He can print tee shirts and mugs and change his orchard’s name to capitalize on the miracle apple. He can obtain a trademark on the image used in commerce and even start a Cult of the Miraculous Apple, if he is so inclined. But just like the sea-sculpted piece of driftwood, the phenomenon of the image on the apple is not a work of “authorship” under copyright law.

Big Tech’s Big Lie

To those tech companies who might advocate Dr. Thaler’s position, it is hard not to admire their gall. Not only has the tech industry spent about 20 years trying to eradicate copyright rights while claiming to support creators, but it has done so behind a wall of separation between the “conduct” of its machines and potential liabilities stemming from that “conduct”—even for dangerous design flaws. By Silicon Valley’s logic, if a social media algorithm motivates a teen suicide, the tech company should be shielded as a neutral party, but if the same company’s AI generates some music, the company should own copyrights in that work as if the AI made “creative choices” at the direction of the company’s owners. These and other hypocrisies are on full display as we confront artificial intelligence.

