The War on Smart Continues with Firings at the Library of Congress and Copyright Office


Since the election, I have been so certain these events were coming that I almost pre-drafted this post, but I didn’t want to be a jinx. Then, when it did happen, I hardly knew what to say. Every day, we are confronted with evidence that the only agenda of Trump 2.0 is wanton destruction. I am increasingly convinced that Trump himself is a mindless wrecking ball set in motion by cyberlibertarians like Peter Thiel, animated by the “Dark Enlightenment” ravings of Curtis Yarvin, and determined to raze America and, on the wasteland, erect their fever-dream of techno-feudalist “corporate zones.” Of course, I only think that because that’s what they explicitly said they want to achieve.

Last Thursday, around 7:00 p.m. Eastern, Trump fired the Librarian of Congress, Dr. Carla Hayden, and then, on Saturday afternoon, he dismissed Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. So, the first conflict in this shit-show (maybe) will be jurisdictional. Although the Librarian is a presidential appointee, the Library and the Copyright Office are staffed by employees of the legislative branch. Thus, a president does not necessarily have the authority to fire Copyright Office or other Library staff, and as of yesterday, Trump appointees Paul Perkins, as acting Register, and Brian Nieves, as acting Deputy Librarian, were both turned away from the Library, according to a story in Wired.

Trump also named his former defense counsel and current assistant AG Todd Blanche as acting Librarian, and other reports on social media stated that DOGE employees arrived at the Copyright Office and were also turned away. So, this is now a right and proper clusterfuck wholly consistent with the Trump brand of governance. Whether Congress will assert its authority in this mess is this week’s question, along with the other question: Why? Why aim the Trump wrecking ball at the Library of Congress and the U.S. Copyright Office?

Dr. Hayden was a natural target for the hate-machine wing of the MAGAverse. She’s Black, an Obama appointee, and easy to accuse—and was accused—of fostering a “leftist DEI” agenda.[1] Notably, the White House email she received about her termination accused her of “putting inappropriate books in the library for children,” which is classic Trumpism—not only an invented allegation about Dr. Hayden, but one which highlights that these people have no idea what the Library of Congress is or does.[2]

Trump firing the Librarian of Congress is an attack on the institution consistent with other administrative attacks on cultural and scientific institutions throughout the country. Appointing a DOJ attorney to be acting Librarian signals hostility toward the purpose and meaning of the Library—a hostility in harmony with the rhetoric of Goebbels wannabe Stephen Miller, who talks about incubating a nationalist, “patriotic” culture. As any student of history knows, that’s a recipe for stupid—not just book stupid, but can’t feed oneself stupid. Today’s editorial in Time by Alondra Nelson, explaining her resignation from both the National Science Foundation and the Library of Congress, makes the point. She writes:

In both these roles, over the past few years, I’ve been asked to serve on diverse bodies that offer guidance about how the Executive and Legislative branches can be stewards of knowledge and create structure to enable discovery, innovation, and ingenuity. In the instance of the National Science Board, this ideal has dissolved so gradually, yet so completely, that I barely noticed its absence until confronted with its hollow simulacrum.

The Copyright Office Debacle

The day before Register Perlmutter was fired, the Copyright Office released a “pre-publication draft” of its third report on copyright and AI—this one addressing training AI models with protected works. Because the Office does not release “pre-publication” drafts, it was clear as of late Saturday that the report had been quickly distributed ahead of the anticipated firing of the Register. In this regard, Shira Perlmutter is owed a debt of gratitude for publishing the Office’s statement at a time when over 40 lawsuits are asking the courts to weigh in on the question of AI training with protected works. But why was the report controversial and a likely catalyst in Trump’s desire to fire Perlmutter?

The pending third report made the AI developers anxious because, as with any report of its kind, the Office would aim to provide guidance on the legal considerations and implications without necessarily choosing sides. The AI developers have been lobbying hard in the press, and with appeals to the administration, to argue that training AI models with protected works is per se fair use. Further, they have argued as a matter of national interest that “winning” the AI competition with China is too important to allow copyright rights to interfere. Not that there’s any merit to that claim, but between Trump’s addle-minded concept of nationalism and the fact that he’s elbow-deep in Big Tech’s booty, copyright interests have been anxious since the inauguration that he might stick his mittens into the mix.

Meanwhile, at the end of April, Tom Jones of the right-wing American Accountability Foundation told the Daily Mail that it was time Trump “…show Carla Hayden and Shira Perlmutter the door and return an America First agenda to the nation’s intellectual property regulation.” So, in addition to being a general dickhead about “leftist agendas,” Jones reiterates the incoherent proposal that America can hope to “lead” in IP while its Executive promotes brain drain across multiple sectors and attacks independent thought and diverse creativity wherever it can. Because attrition like the resignation of Alondra Nelson is exactly how you lose in IP, in case anyone’s keeping score.

So, Dr. Hayden’s ouster, packaged in the rhetoric of “anti-DEI,” is an attack on yet another cultural institution (one that houses the world’s largest collection), while the broadside at the Copyright Office may be solely about the reports on AI. Regardless, Trump gets to feed red meat to the MAGA nationalists and his Big Tech patrons at the same time, and where we are now is a lot of uncertainty pending chaos. Further, if Trump 2.0 is indeed designed to soften the ground for a techno-feudalist makeover, then tanking the creative economy would fit that agenda, as would allowing AI developers to build whatever they want without oversight of any kind.

One can only imagine who an illiterate, demented, and seditious facsimile of a president would tap as the next full-time Librarian of Congress—my money’s been on Kid Rock since November—but it will likely be someone whose idea of a national library roughly matches Pete Hegseth’s comprehension of national defense. Everything about Trump 2.0 mimics weak, authoritarian nations, including the aforementioned effort to foster a nationalist culture. To achieve that aim, authoritarians will always try to exsanguinate the professions supported by copyright law while they destroy evidence of historical fact and scientific discovery—a narrative housed within and symbolized by the Library of Congress.

Not since the British torched the place in 1814 have occupants of Washington shown so much contempt for America’s genuine capacity for greatness. More profound than the hostile takeover of the Kennedy Center, removal of historical material from federal institutions and websites, or cutting the NEA budget, the concurrent dismissals of the Librarian and Register should be understood as an attack on the intent of the IP clause of the Constitution to “promote the progress of science and useful arts.”


[1] As an aside, I criticized Dr. Hayden in 2016 for her improper and sudden ouster of then Register of Copyrights Maria Pallante, and I would likely still quarrel with her on that and other copyright matters today, but all that has nothing to do with these recent events.

[2] The Library houses the world’s largest collection of EVERYTHING. It is not comparable to a library in your local community.

Photo by Extender_01

Comments to the Copyright Office on Artificial Intelligence

Below are the responses I submitted to selected questions in the U.S. Copyright Office Notice of Inquiry and request for comments on artificial intelligence.

8.1. In light of the Supreme Court’s recent decisions in Google v. Oracle America and Andy Warhol Foundation v. Goldsmith, how should the “purpose and character” of the use of copyrighted works to train an AI model be evaluated? What is the relevant use to be analyzed? Do different stages of training, such as pre-training and fine-tuning, raise different considerations under the first fair use factor?

In my view, neither case is helpful to a putative AI-developer defendant regarding the first-factor question being asked. Under Warhol, there is no colorable defense that the purpose of AI training is to achieve “critical bearing” on the works used, and it is difficult to imagine how most, if any, developers would make such a claim. In Oracle, the reimplementation of APIs for the development of new computer programs is highly distinguishable from, for instance, copying a billion images in their entirety to “train” a machine to generate images. Further, the Court cautioned that Oracle is narrowly tailored to computer programs as the copyrightable works in question.

8.5. Under the fourth factor of the fair use analysis, how should the effect on the potential market for or value of a copyrighted work used to train an AI model be measured? Should the inquiry be whether the outputs of the AI system incorporating the model compete with a particular copyrighted work, the body of works of the same author, or the market for that general class of works?

This is one example in which generative AI can upend copyright doctrine. Even where a use may involve millions of works (e.g., Google Books), the fourth factor considers potential harm to the market for the works, whereas generative AI—if it does not produce market substitutes—primarily represents potential harm to authors and future authorship. While it is beyond the scope of copyright to protect creative jobs against technological changes per se, the consideration in the context of “training” should be expansive and doctrinal—namely that a potential threat to “authorship” cannot, by definition, “promote the progress” of “authorship.” Therefore, the fair use defense should be unavailable to the developer in this context. Where the AI does produce market substitutes, case law should be sufficient, including the fair use factor four inquiry.

10.3. Should Congress consider establishing a compulsory licensing regime? If so, what should such a regime look like? What activities should the license cover, what works would be subject to the license, and would copyright owners have the ability to opt out? How should royalty rates and terms be set, allocated, reported and distributed?

No. With the development of streaming platforms, compulsory licensing devastated the livelihoods of songwriters. AI development is still nascent, and we cannot predict how the market will change in the future. Legislation of this nature is likely to be short-sighted and may lock in regimes that fail to serve authors.

15. In order to allow copyright owners to determine whether their works have been used, should developers of AI models be required to collect, retain, and disclose records regarding the materials used to train their models? Should creators of training datasets have a similar obligation?

Yes. Both parties should be obligated to collect and maintain records to foster transparency. That said, considering the lack of good faith shown by the tech sector in complying with regimes like the DMCA, provisions of this nature should contain actual penalties for failure to comply. Any proposal based upon a civil liability shield in exchange for compliance should be dismissed as a non-starter and a waste of time.

18. Under copyright law, are there circumstances when a human using a generative AI system should be considered the “author” of material produced by the system? If so, what factors are relevant to that determination? For example, is selecting what material an AI model is trained on and/or providing an iterative series of text commands or prompts sufficient to claim authorship of the resulting output?

The threshold for copyrightability must still be a “modicum of originality” contributed by the human, whether the tool used is a camera, keyboard, paintbrush, or, perhaps, an AI-based application. It is certainly possible for a human using a generative AI to be the “author” of the work, though, I believe, not the developer of the AI model or system. Merely selecting the materials used to “train” an AI cannot be considered “authorship,” let alone “authorship” of what may be tens of millions of outputs. Further, because it appears that most models require the ingestion of tens of millions of works in order to “learn,” this volume of collection by means of, for example, internet scraping is too indiscriminate to be considered “selection.”

20. Is legal protection for AI-generated material desirable as a policy matter? Is legal protection for AI-generated material necessary to encourage development of generative AI technologies and systems? Does existing copyright protection for computer code that operates a generative AI system provide sufficient incentives?

My answer to the first two questions is No, and, therefore, the third question is moot. In general, it is not desirable to attach copyright rights to AI-generated material any more than it is desirable to vest civil rights in robots. As stated above, if individual humans use certain AI-based tools to create works of expression, the use of these tools should not automatically disqualify the entire work from copyright protection.

But we must be cautious about vesting copyright rights in enterprise-scale, corporate production of works by, for instance, an AI-developer/producer. Beyond posing a threat to the careers of creative professionals (and to the cultural value of creative work), at a certain point, the application of copyright law itself may become irrelevant and/or unconstitutional. For instance, if generative AI were to foster an oligopoly of developer/producers, it is conceivable that copyright enforcement would become meaningless. Imagine the chaos (or futility) arising from a claim that AI-Developer Alpha allegedly infringed the work of AI-Developer Beta. Such a scenario raises difficult questions of standing and, as noted below, may frustrate the substantial similarity inquiry to the point of irrelevance.

Meanwhile, if these potential outcomes result in shrinking the population of working authors and the diversity of works, this would be anathema to the constitutional purpose to “promote progress.”

21. Does the Copyright Clause in the U.S. Constitution permit copyright protection for AI-generated material? Would such protection “promote the progress of science and useful arts”? If so, how?

While patent protection for AI systems may promote the “useful arts,” copyright protection for AI-generated works does not inherently promote “science” (a.k.a. new creative expression). Again, if generative AI is likely to reduce the number of working “authors” in the U.S., this is offensive to the Progress Clause in Article I. That the American Framers could only conceive of human “authors” is not just a technicality of history. Whether in 1787 or today, law, like art, is a human construct that serves no purpose beyond human experience. As I have stated a few times here, it is contradictory to believe that one can promote “authorship” while obviating the role of “authors.” That the Progress Clause could be interpreted to encompass this result defies textual, doctrinal, and historical reason.

22. Can AI-generated outputs implicate the exclusive rights of preexisting copyrighted works, such as the right of reproduction or the derivative work right? If so, in what circumstances?

Yes, and it’s happening right now. Users of generative AI are producing famous—and famously protected—reproductions and derivative visual works of, for instance, Marvel characters. While copyright owner Disney may not elect to take legal action against, for instance, parties sharing these outputs on social media, there is nothing about the use of AI in these examples that militates against finding that the copies and derivatives are infringing. In fact, it seems certain that if these outputs were to be used commercially, the inevitable litigation would be short work for the court.

As to whether the AI developer may be liable for these copies and derivatives, it seems straightforward to find that if, for instance, “Daredevil” was input for training and “Daredevil” was later output by the system, then the developer may be liable for both direct and secondary copyright infringement. Direct copyright infringement in violation of §106(1) occurs during input, and secondary infringement arises due to the developer’s failure to prevent the infringing work from being output.

23. Is the substantial similarity test adequate to address claims of infringement based on outputs from a generative AI system, or is some other standard appropriate or necessary?

Combined with 24 below.

24. How can copyright owners prove the element of copying (such as by demonstrating access to a copyrighted work) if the developer of the AI model does not maintain or make available records of what training material it used? Are existing civil discovery rules sufficient to address this situation?

The doctrines of “substantial similarity” and “access” may both be challenged by generative AI. First, if a given system is prompted to produce common or popular probability outcomes, it may generate thousands or millions of similar works, all of which are potentially non-infringing under the doctrine of “independent creation.” For example, the AI user who is unfamiliar with the work of Karla Ortiz may inadvertently produce a work that is “substantially similar” to one of Ortiz’s images, but the copy may be said to be “independently created” by that individual. The difficulty is novel because “independent creation” is historically anomalous, whereas AI could make it rampant.

This also goes to the question of “access” and liability for that “access.” The AI developer undoubtedly has “access” to any work fed into its model, but it seems unlikely the user of the AI can be said to have had “access” in this context. Then, because of the increased likelihood of multiple, “independently created” similar works, proving “access” may be moot with regard to the liability of the individual user of the AI.

25. If AI-generated material is found to infringe a copyrighted work, who should be directly or secondarily liable—the developer of a generative AI model, the developer of the system incorporating that model, end users of the system, or other parties?

Assuming the user of the AI knowingly made the infringing work, he/she is liable for direct infringement. Secondary liability for any developer may arise if the allegedly infringing work is a copy of, or is “substantially similar” to, a whole work that was fed into the model or a subsequent application.

25.1. Do “open-source” AI models raise unique considerations with respect to infringement based on their outputs?

I fail to see why “open source” would alter the consideration of an alleged infringement.

26. If a generative AI system is trained on copyrighted works containing copyright management information, how does 17 U.S.C. 1202(b) apply to the treatment of that information in outputs of the system?

As in pre-AI considerations, §1202(b) denies the AI developer an “innocent infringer” defense.

28. Should the law require AI-generated material to be labeled or otherwise publicly identified as being generated by AI? If so, in what context should the requirement apply and how should it work?

A requirement to identify AI-generated material likely addresses topics outside the scope of copyright law. Presumably, the public would be best served by labels used to mitigate fraud and other forms of misinformation, and the complications arising from that intent are best left to the FTC, Congress, and agencies other than the Copyright Office. Even where AI may be used to create forgeries, this is already criminal conduct, and copyright plays little or no role. But unless a creative work is illegally presented as the work of a named artist, there is no compelling interest per se in notifying the public that a work, or part of a work, was generated by AI.

30. Should Congress establish a new federal right, similar to state law rights of publicity, that would apply to AI-generated material? If so, should it preempt state laws or set a ceiling or floor for state law protections? What should be the contours of such a right?

Again, this is not a copyright matter, but a federal ROP may address some of the already rampant, unethical uses of AI where the potential harm to both the infringed party and the public is significant. The 25 state ROP laws do not address, for instance, the potential harm caused by AI’s capacity to generate “in the style of” works, especially in the commercial market.

If ROP law is expanded, it should 1) apply to all persons, not just celebrities; 2) anticipate and remedy AI-enabled harms stemming from misappropriation of likeness for purposes other than commercial advertising; and 3) not restrict expressive uses of AI-generated likeness for purposes (e.g., biographical films) that fall within the scope of protected speech.

In a world in which all media travels the globe instantly, a federal ROP statute would seem to be the only sensible framework in which to address the myriad potential harms. As for preemption, this may raise the cost of potential litigation by eliminating the option of a state filing, but further study into this as a matter of civil procedure is required.

31. Are there or should there be protections against an AI system generating outputs that imitate the artistic style of a human creator (such as an AI system producing visual works “in the style of” a specific artist)? Who should be eligible for such protection? What form should it take?

This concern could be addressed in a new, federal ROP statute while leaving undisturbed the doctrine that copyright does not protect “style.” While an amendment to the Copyright Act akin to VARA could be written to encompass a narrow protection for “style,” the intent seems better suited to ROP. Additionally, it may be easier and more effective to write a new law for the purpose of federal ROP than to amend the Copyright Act, especially when the U.S. is not a moral rights jurisdiction vis-à-vis copyright. Finally, it should be noted that, if protection of this nature were enforceable, it may create new licensing opportunities for artists and prospective commercial users.

As to application of the law, again, forgery is covered by the criminal code, but the most likely harm would seem to be commercial use of “in the style of” works in a manner that may implicate the artist’s reputation and/or deny her a commission or a licensing opportunity. Still, if such a right were to be established, exceptions would be required so that what we might call “reminiscent of” is distinguishable from “in the style of.” This is a highly subjective consideration that may draw lessons from “substantial similarity” doctrine, even if the new right does not sound in copyright. For instance, prompting an AI for a work “in the style of [named artist]” may be analogized to the principle of “access.”

34. Please identify any issues not mentioned above that the Copyright Office should consider in conducting this study.

In regard to disclaiming AI-generated material in a registration application, the current guidelines are likely to confuse applicants and overburden examiners who have neither the resources, nor necessarily the expertise, to engage in assessments normally left to the courts. Although it is understandable that the Office wishes to preserve the “human authorship” doctrine, asking an applicant how a work was made is a significant shift that should not be taken lightly.

Applicants are going to submit myriad statements which are either unfounded in law, or which beg for what amounts to a “substantial similarity” test on the part of the examiner. This may strain Office resources and potentially cost applicants additional fees. Instead, if the Office were to add a checkbox at the Certification stage of the application, asking whether the deposit copy(ies) contain any AI-generated material, the applicant would be given the opportunity to make a truthful statement subject to §506(e) while leaving the question of separating the AI material from the human authorship to the courts, as it should be.

Let’s Stop Analogizing Human Creators to Machines

Just as it is folly to anthropomorphize computers and robots, it is also unhelpful to discuss the implications of generative AI in copyright law by analogizing machines to authors.[1] In 2019, I explored the idea that “machine learning” could be analogous to human reading if the human happens to have an eidetic memory. But this was a thought exercise, and in that post, I also imagined machine training that serves a computer science or research purpose—not necessarily generative AIs trained on protected works designed to produce works without authors.

In the present discussion, however, certain parties weighing in on AI and copyright seem to advocate policy that is premised on the language and principles of existing doctrine as applicable to the technological processes of both the input and output sides of the generative AI equation. Of course, policy discussions usually begin with the existing framework, but in this instance, it can be a shaky starting place because generative AI presents some unique challenges—and not just for the practice of copyright law.

We should be wary of analogizing machine functions to human activity for the simple reason that copyright law (indeed all law) has never been anything but anthropocentric. Although it is difficult to avoid speaking in terms of machines “learning” or “creating,” it is essential that we either constantly remind ourselves that these are weak, inaccurate metaphors or develop a new glossary to describe what certain AIs may be doing in the world of creative production.

On the input (training) side of the equation, the moment someone says something like, “Humans learn to make art by looking at art, and generative AIs do the same thing,” the speaker should be directed to the break-out session on sci-fi and excused from any serious conversation about applicable copyright law. Likewise, on the output side, comparisons of AI to other technological developments—from the printing press to Photoshop—should be presumed irrelevant unless the AI at issue can plausibly be described as a tool of the author rather than the primary maker of a work of creative expression.

Copyright Office Guidance Highlights Some Key Difficulties

To emphasize the exceptional nature of this discussion, even experts are somewhat confused by both the doctrinal and administrative aspects of the new guidelines published by the U.S. Copyright Office directing authors how to disclaim AI-generated material in a registration application. The confusion is hardly surprising because generative AI has prompted the Office to ask an unprecedented question—namely, How was this work made?

As noted in several posts, copyrightability has always been agnostic with regard to the creative process. Copyright rights attach to works that show a modicum of originality, and the Copyright Office does not generally ask what tools, methods, etc. the author used to make a work.[2] But this historic practice was then confronted by the now widely reported applications submitted by Stephen Thaler and Kris Kashtanova, both claiming copyright in visual works made with generative AI.

In both cases, the Copyright Office rejected the registration applications for the visual works based on the longstanding, bright-line doctrine that copyright rights can only attach to works made by human beings. In Thaler’s case, the consideration is straightforward because the claimant affirmed that the image was produced entirely by a machine. Kashtanova, on the other hand, asserts more than de minimis authorship (i.e., using AI as a tool) to produce the visual elements in a comic book.

Whether in response to Kashtanova—or certainly anticipating applications yet to come—the Office guidelines, muddy as they are, attempt to address the difficult question as to whether copyright attaches to a work that combines human authorship and AI generation, and how to draw distinctions between the two. This is not only new territory for the Office as a doctrinal matter but a potential mess as an administrative one.

The Copyright Office has never been tasked with separating the protectable expression attributable to a human from the unprotectable expression attributable to a machine. Even if it could be said that photography has always provoked this tension (a discussion on its own), the analysis has never been an issue for the Office when registering works, but only for the courts in resolving claims of infringement. In fact, Warhol v. Goldsmith, although before SCOTUS as a fair use case, is a prime example of how tricky it can be to separate the factual elements of a photograph from the expressive elements.

But now the Copyright Office is potentially tasked with a copyrightability question that, in practice, would ask both the author and the examiner to engage in a version of the idea/expression dichotomy analysis—first separating the machine generated material from the author’s material and then considering whether the author has a valid claim in the protectable expression.

This is not so easy to accomplish in a work that combines author and machine-made elements in a manner that may be subtly intertwined; it begs new questions about what the AI “contributed” to a given work; and the inquiry is further complicated by the variety of AI tools in the market or in development. Then, because neither the author/claimant nor the Office examiner is likely a copyright attorney (let alone a court), the inquiry is fraught with difficulty as an administrative process—and that’s if the author makes a good-faith effort to disclaim the AI-generated material in the first place.

Many independent authors are confused enough by the Limit of Claim in a registration application or the concept of “published” versus “unpublished.” Asking these same creators to delve into the metaphysics implied by the AI/Author distinction seems like a dubious enterprise, and one that is not likely to foster more faith in the copyright system than the average indie creator has right now.

Copyrightability Could Remain Blind But …

It is understandable that some creators (e.g., filmmakers using certain plug-ins) may be concerned that the Copyright Office has already taken too broad a view—connoting a per se rule that denies copyrightability for any work generated with any AI technology. This concern is a reminder that AI should not be discussed as a monolithic topic because not all AI-enhanced products do the same thing. And again, this may imply a need for some new terms rather than the words we use to describe human activities.

In this light, one could follow a different line of reasoning and argue that the agnosticism of copyrightability vis-à-vis process has always implied a presumption of human authorship where other factors—from technological enhancements to dumb luck—invisibly contribute to the protectable expression. Relatedly, a photographer can add a filter or plug-in that changes the expressive qualities of her image, but doing so is considered part of the selection and arrangement aspect of her authorship and does not dilute the copyrightability of the image.

Some extraordinary visual work has already been produced by professional artists using AI to yield results that are too strikingly well-crafted to believe that the author has not exerted considerable influence over the final image. In this regard, then, perhaps the copyrightability question at the registration stage, no matter how sophisticated the “filter” becomes, should remain blind to process. The Copyright Office could continue to register works submitted by valid claimants without asking the novel How question.

But the more that works may be generated with little or no human spark, the more this agnostic, status-quo approach could unravel the foundation of copyright rights altogether. And it would not be the first time that major tech companies have sought to do exactly that. It is no surprise that an AI developer or a producer using AI would seek the financial benefits of copyright protection; but without a defensible presence of human expression in the work, the exclusive rights of copyright cannot vest in a person with the standing to defend those rights. Nowhere in U.S. law do non-humans have rights of any kind, and this foundational principle reminds us that although machine activity can be compared to human activity as an allegorical construct, this is too whimsical for a serious policy discussion.

Again, I highlight this tangle of administrative and doctrinal factors to emphasize the point that generative AI does not merely present new variations on old questions (e.g., photography), but raises novel questions that cannot easily be answered by analogies to the past. If the challenges presented by generative AI are to be resolved sensibly, and in a way that will serve independent creators, policymakers and thought leaders on copyright law should be skeptical of arguments that too earnestly attempt to transpose centuries of doctrine for human activity into principles applied to machine activity.


[1] I do not distinguish “human” authors, because there is no other kind.

[2] I say “generally” only because I cannot account for every conversation among claimants and examiners.

Image by boom15th931