Dr. Thaler Asks SCOTUS to Decide the GAI Authorship Question


On October 9th, Dr. Stephen Thaler, a computer scientist and owner of the generative AI “Creativity Machine,” filed a petition for cert with the U.S. Supreme Court, seeking to overturn the DC Circuit opinion in Thaler v. Perlmutter. As discussed in several posts, Dr. Thaler’s machine independently and autonomously produced an image he titled “A Recent Entrance to Paradise” which he then submitted for registration with the U.S. Copyright Office. The Office rejected the application on the basis that protected works must entail human authorship—a rule Dr. Thaler alleges has been invented by the Office and which he claims is unsupported either by statute or the foundational purpose of copyright law. In March, the circuit court ruled in favor of the Copyright Office.

If the Supreme Court agrees to review the case, I hope it will not be distracted by the selection and arrangement of precedent employed in the sleight of hand at work in Thaler’s arguments—all true out of context, all fatal in context. As argued in an earlier post, the courts should view GAI for the novelty that it is and not be unduly guided by case law addressing technologies that are in no way analogous. In particular, the Court should be wary of petitioners conflating distribution technology (e.g., the VCR) with production technology (e.g., Photoshop) in the context of GAI, which, for purposes of the “authorship” question, is solely productive.

In that light, Thaler avers in the petition, “The last watershed moment when technology changed the world of copyright was in 1884, when this Court expanded the definition of ‘writing’ to include photography.” True, but the Burrow-Giles Court affirmed that photography is a mode of expression based on its inference of the human photographer’s creative choices evident in the photograph “Oscar Wilde No. 18.” (See post here.)

By no means did the Court forecast that a robot camera roaming the streets of New York, autonomously capturing photographs, would be the “author” of those images, or that the robot’s owner would necessarily be the “author,” absent evidence of his creative effort in the resulting images. On the contrary, even in the 1884 decision, when the Court opined that “ordinary” photographs might not be protected, it prefaced what is today a court’s duty to separate the protectable elements from the unprotectable elements—i.e., to identify the human authorship in the machine-made photograph.

Naturally, Dr. Thaler does not claim that “Creativity Machine” is the “author,” but rather that he is the “author” by virtue of his owning the machine and instructing it to make something. This argument relies on an erroneous claim, echoed by others, that copyright law developed as a framework for causing “creative stuff” to exist by any means. This is wrong, and to say otherwise would be to embrace the doctrine of “copyright by adoption” (i.e., literally finding a work and claiming authorship), or it would erode copyright boundaries like the idea/expression dichotomy or “things found in nature.”

For instance, a machine autonomously making a visual work is analogous to nature making beautiful things, none of which may be claimed as works of authorship on their own. Instead, a human must make a work of expression out of things found in nature, and as a rule, the “work of nature” will be excluded from protection. This is settled doctrine, which was also raised in Burrow-Giles, where the defense erroneously argued that allowing photographer Sarony to claim copyright in the photo at issue would be tantamount to granting him a copyright in the person of Oscar Wilde.

Creative Works Are Not Apples

As he did in the lower court, Dr. Thaler seeks to force intangible property to conform to the rules of tangible property, as if the purpose of copyright were to grow and harvest works out of computers like fruit from an orchard. The petition states, “There is a longstanding principle in property law, sometimes referred to as accession or the fruit of the tree doctrine, under which a property owner owns property made by their property.” True. And copyright law doesn’t work that way. As discussed in an earlier response to this argument, even if a tree produces a mutant fruit that is uniquely aesthetic as a “sculptural work,” the landowner is not the “author” of that “work” as a matter of law. (He is, however, free to enter it in a county fair contest.)

All people are persons, but not all persons are people.

The petition states, “This Court need look no further than the fact that nonhuman authors such as corporations and other nonhuman ‘persons’ have been authors without controversy for over a century.” Actually, the Court need look no further than the fact that no copyright rights have ever been first vested in non-human “persons.”

Thaler continues to overemphasize the fact that the statute does not explicitly state that “authors” are human beings. The Court should not find this argument persuasive unless the aim is to unravel countless statutes in which natural “persons” are clearly implied without being explicitly distinguished from non-natural “persons.” For instance, if we follow Thaler’s logic that, because non-natural “persons” like corporations can be “authors” under copyright’s work made for hire (WMFH) doctrine, “persons” need not be human anywhere in the law, does this mean that a CEO who causes a corporation to fail is guilty of homicide?

Clearly, the law is replete with distinctions between natural “persons” and non-natural “persons,” though perhaps contending with AI recommends revising Title 1 of the U.S. Code to affirm that “persons” does not include machines. In the meantime, if the Court even hears Thaler, it should reject the semantic game posing as a rule of statutory construction. Of course, the history of copyright law does not address machines independently making “works.” It’s never happened before. And again, the WMFH doctrine affirms the human authorship principle because it requires transfer of rights vested in a human to another party, which may be a non-natural “person.”

Case Law Chaos

If the Court were to adopt Thaler’s overbroad theory of “authorship,” it would erode critical boundaries central to the analysis of a claim of infringement. For instance, with “A Recent Entrance to Paradise,” on what basis would a court begin the substantial similarity analysis when some other GAI owner’s machine independently produces an allegedly infringing copy of the work? What testimony will be admissible to describe the original “author’s” expression and to compare and contrast that with testimony describing the “follow-on author’s” expression—to say nothing of a fair use defense for a machine!

As stated in the past, I think such scenarios make chaos out of the principles of “access” and “independent creation” in the analysis. Does GAI B have knowledge of, and access to, GAI A’s “works”? Or do the owners of either GAI, as putative “authors,” have such knowledge and access based on the datasets inside their respective machines? When facing potential liability, a defendant won’t want to have knowledge of anything, but the absence of a nexus linking man’s creative intent to the machine’s production is precisely why the former is not an “author” of the outputs of the latter. The potential (I think inevitable) chaos of litigating such a case would appear to moot the purpose of registering a claim of copyright in the first place.

The Court should recognize that Thaler’s overly expansive view of “authors” would be destructive to copyright law (and perhaps other law), while the Copyright Office has already articulated a more nuanced approach to GAI that recognizes its potential utility as a tool of expression. Even where the Office’s guidance and decisions (e.g., with Kris Kashtanova and Jason Allen) may be awkward in these early encounters with GAI, its affirmation of the human authorship doctrine is not an administrative invention, let alone one “overstepping its authority” into “policymaking,” as Thaler alleges.

Thaler relies substantially on a general principle that is true but, in this case, misleading. Copyright law has always adapted to new technology, but where technology has caused its contours to expand or contract, Congress and the courts have sought to retain core principles—sometimes encompassing new modes of expression, sometimes drawing new limits on the protection of creative expressions. In this regard, the Copyright Office already articulates both an early rationale and administrative process for works made in “collaboration” with GAI, and just as with photography, courts will eventually adjudicate the scope of protection in any of those works that may be infringed.

Instead, Dr. Thaler’s petition, at times written as if all use of GAI is automatically ineligible for protection, begs the Court to overreact and unravel centuries of copyright tradition by finding that no human creator is needed. This is an error, as the lower court affirmed. But for the fact that artificial intelligence is such a hot topic, I would predict the Supreme Court would decline to review this claim.


Image shown: “A Recent Entrance to Paradise”

Sound AI Policy Demands Protecting Diverse Expression


Over the past 13 years, I have repeated variations on the theme that strong copyright rights are essential because a healthy democracy requires a diverse, professional creative sector. Typically, I have advocated this perspective to refute the claim by the copyleft that copyright rights are in conflict with the speech right. Now that we are in a climate in which creators find themselves asserting their rights against forces that wish to limit their copyright rights, policymakers should take note that America’s diversity of expression is a critical advantage in developing world-class artificial intelligence (AI).

AI developers want policymakers to believe that respecting copyright rights is a barrier to innovation and America winning the AI race with China. A key point animating that claim is that because China doesn’t respect IP, America must likewise cheat to avoid being at a strategic disadvantage. This is misguided. As a baseline principle, it should not be the policy of the United States to risk $2 trillion worth of economic value in the creative industries as a chip on the AI roulette wheel, let alone to play the game on China’s terms.

“Silicon Valley watchers worry that enthusiasm for AI has turned into a bubble that has increasingly loud echoes of the mania around the internet’s infrastructure build-out in the late 1990s,” reports the Wall Street Journal. That AI investment and hype are incubating a bubble is almost certain, but development will persist, and the question remains how to build world-class AI products that can fulfill some of the grand promises of the industry. One answer to that question is that America must not squander the advantage gained by its foundational commitment to IP, which has fostered a rich supply of creative and cultural wealth unmatched anywhere in the world.

The Science Requires Symbiosis

Although I do not personally like to think of creative works as raw materials comparable to iron or coal or fossil fuels, I shall indulge the AI developers in this analogy to stress the point that respecting copyright rights is not an option. Simply put, if development of AI severely degrades the incentive of people to create new works, then both the creative economy and the AI models will collapse. The issue is described in a July 2024 paper by a team of computer scientists, explaining that as LLMs train on “recursively generated data,” the data becomes poisoned and the models collapse.

It’s not a complicated idea. Just like a power plant cannot burn the same lump of coal twice, or a farmer cannot grow anything in soil depleted of nutrients, LLMs need a steady supply of new material. “To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time,” the paper states. [emphasis added]

Thus, while the current generation of LLMs trains on billions of available works, the next generation will cannibalize itself unless there is new creative work on which to continue training. In other words, to achieve the best ambitions of AI, the American plan cannot envision a future without authors of copyrightable works—neither a future in which AI replaces too many creators nor one that harms the incentive to create, which lies at the heart of IP.
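The dynamic the paper describes can be illustrated with a toy simulation (a hypothetical sketch of the recursion, not the paper’s actual experiment): each “generation” of a trivially simple statistical model is fit only to samples drawn from the previous generation’s model, with no original data preserved.

```python
import random
import statistics

# Toy sketch of "model collapse": each generation fits a simple Gaussian
# model to samples drawn from the PREVIOUS generation's model, with no
# fresh real-world data mixed back in.
random.seed(42)

mu, sigma = 0.0, 1.0          # generation 0: the "real" distribution
history = [sigma]
for generation in range(30):
    # Train only on 25 outputs of the prior model -- "recursively generated data."
    samples = [random.gauss(mu, sigma) for _ in range(25)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

# With no new original data, the estimated spread follows a random walk
# that, in the long run, tends toward zero: the model gradually forgets
# the tails of the original distribution.
print(f"spread after 30 generations: {sigma:.3f} (started at 1.0)")
```

The shrinking, drifting estimate is the statistical analogue of the “lump of coal” point above: without a preserved source of original data, each generation learns only from an increasingly degraded copy of the last.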

If nothing else, the science of model collapse insists that to develop world-class AI, it is essential to uphold the American traditions of strong IP and the absolute certainty of the speech right for all creators. These two forces have long been a source of strength that no other nation can claim. Simply put, if we attack the copyright rights of authors from one side and/or speech rights from another, then we will cede ground that is ours to lose.

Copyright Rights Support Safer AI Products

While there may be ideological differences about the value of diversity in creative expression, there is allegedly no political divide when it comes to AI and harm to young users. In a recent Judiciary Committee hearing, Senators Hawley and Durbin repeated the theme that Congress is in lockstep when it comes to Big Tech’s utter lack of responsibility to young users in particular. And although it is popular to say that we shouldn’t talk about AI as a monolith, this principle should not fail to acknowledge that the LLM trained on unlicensed creative works may be the exact same product that lacks guardrails and safety features.

ChatGPT is ChatGPT, whether it is used to expedite scholarly research or by a sixteen-year-old to encourage and plan his suicide. And one ethical consideration is clear enough: no authors of literature volunteered to have their work “train” a machine that would help a child hurt or kill himself. And perhaps the policy considerations are more closely aligned than they appear. Although licensing creative works for AI “training” will not in itself foster safety design in the products, a licensing regime will necessitate one thing that both IP owners and countless parents are demanding—transparency.

Consumers, lawmakers, and the courts should decline to accept that AI development must be allowed to proceed apace in the “black box” without public scrutiny over what goes into or comes out of the box. Members of Congress on both sides of the aisle who today find themselves regretting past grants of unconditional immunity to the Tech Bros should heed the warnings of the creative community and this time, reject the culture of moving fast and breaking things. As Joseph Gordon-Levitt describes in a recent post about AI, we can move fast but also steer. In fact, I don’t think steering is optional.


Image source by: TRAIMAK

Inapt Mixing of National Defense with Copyright Law Raises Broader Questions


Dr. Rebecca Grant, Vice President of the Lexington Institute, alleges in a recent post that copyright owners—specifically the bogeyman of “Hollywood”—form an obstacle to national security in the effort to win the AI cold war with China. Out of respect for her credentials as a security expert, I shall assume that all of Dr. Grant’s specific references to the role of AI in defense operations are accurately, if broadly, stated. But her references to fair use, akin to OpenAI’s March memo to the Office of Science and Technology Policy, are misguided, if not intentionally misleading.

After overstating the significance of the findings in the Bartz and Kadrey opinions, Dr. Grant writes, “It’s not Hollywood’s job to factor in national security.  Discussions around AI and creativity will persist.  However, a new key issue is indeed emerging: allowing American AI models to continue training on the highest-quality data is crucial to maintaining the lead over China.”  Later in the post, she cites literature, especially works of fiction, as “high-quality data,” and even if she is correct on the science and its role in defense operations, the erroneous invocation of fair use (and Hollywood for that matter) casts doubt over the entire premise of her argument—especially in the era of Trump 2.0.

To reiterate what I said in response to OpenAI leaning too hard on national security in this context, Dr. Grant’s argument has nothing to do with the affirmative defense of fair use in a copyright infringement claim. Fair use is a case-by-case consideration for the courts, and in fact, Judge Chhabria in Kadrey forecasts several reasons why AI developers in many of the other cases are likely to lose. What Dr. Grant is advocating is a blanket exemption for mass copyright infringement with the urgency of “beating China” on an accelerated timeline. If that is the goal, it is neither practical nor well founded to even mention fair use, but it is hard to say whether copyright law is simply outside Dr. Grant’s wheelhouse, or if “fair use” is being used rhetorically, like so many terms mangled by the current administration, to mask the real agenda.

Should Authors Subsidize AI Whether They Like It Or Not?

The days of the WWII-era total effort are both a distant memory and inapplicable to a cold war, while the principle that politics stop at the water’s edge is one of many American virtues eroded to an empty slogan. I do not dispute that China is an adversary, but sadly, the beacon of U.S. democracy is a sputtering fluorescent tube in the hands of an administration that emulates the policies and propaganda of our adversaries. Presently, more Americans are concerned about becoming the Chinese Communist Party (CCP) than beating it in the new AI-driven cold war. And I will venture to guess that some version of that view is held by most authors of creative and cultural works protected by copyright law.

In any era, it would be wrong to insist that the nation’s authors and artists are required to contribute to a national defense effort, but even if every novelist in the country were committed to that idea, Dr. Grant overlooks a few complications. For instance, she acknowledges that the AI war relies upon the massive private investment of Big Tech in contrast to historical defense initiatives funded by the government. But she appears unconcerned by the assertion that America’s authors should subsidize all AI applications, including all commercial interests of the tech companies, on the basis that some uses will be applied in defense operations.

Absent the fog in the present climate, a traditional conservative might notice that this model looks a lot like Communism. Even defense-related acquisition in the U.S. entails billions in public dollars paid to private industry contractors and suppliers who pay for the materials and labor needed to build planes, write software, etc. Yet, stunningly, the press release linking me to Dr. Grant’s post dings “lefty Hollywood” while the post itself argues that America’s authors must be compelled to underwrite both the commercial and military applications developed by Big Tech.

This sleight of hand is exacerbated by the fact that Web 2.0 was the principal catalyst in weakening the American democracy allegedly being defended against the CCP. The current administration’s assault on the pillars of democracy is a direct consequence of a dangerously disinformed electorate, a paradoxical result of the “information age,” which includes, by the way, unfettered access by adversarial nations to the American public. As a policy matter, the abysmal failure of the information revolution is a consequence of allowing Big Tech to do whatever the hell it wants, which stands in stark contrast to the litany of rules and regulations that guide the manufacture of the many warplanes Dr. Grant knows so well.

Generative AI already reveals its many toxic applications—from sextortion to parties imitating the voice of the Secretary of State, and so on. The implications for AI deepening epistemic crisis and dangerous chicanery are obvious to a thirteen-year-old reading her first sci-fi novel. And amid that chaos, few authors or artists trust any of the parties leading the AI cold war—from the President to the techbros—to give a damn about democracy in the opaque and classified race with the Chinese.

I’ve said it before and will keep saying it: there is no virtue in beating the CCP if it means becoming the CCP in the process. The ethical development and application of AI goes to the heart of whether the U.S. will completely abandon the principles that made it the indispensable democratic leader—a reputation already damaged without the assistance of artificial intelligence. In that context, I think it is fair to say that few authors and artists would consider it a patriotic duty to contribute their works to “national security” while simultaneously allowing the richest companies on Earth to exploit their labors for profit without permission or compensation.


Photo by: Tanaonte