Richard Prince’s “New Portraits” Show Was a Big Fair Use Error

Yesterday, New York federal judge Sidney Stein ruled that Richard Prince, one of the most famous appropriation artists in the world, infringed the copyrights of photographers Donald Graham and Eric McNatt by using their works in the controversial “New Portraits” series. Prince and his co-defendant, gallery owner Lawrence Gagosian, were ordered to pay Graham and McNatt five times the sale price of Prince’s infringing canvases, plus unspecified expenses. Further, Prince is “enjoined from reproducing, modifying, preparing derivative works from, displaying publicly, selling, offering to sell, or otherwise distributing the” photographs belonging to Graham and McNatt.

The “New Portraits” canvases sold for prices ranging between about $40,000 and $150,000, indicating that the combined awards will be substantially higher than maximum statutory damages, in an outcome that highlights the significance of the Supreme Court decision in Andy Warhol Foundation v. Goldsmith. I think it’s fair to say that the over-expansive interpretation of “transformativeness” under the fair use factor one analysis has now been put to rest, and independent creators—perhaps especially those who are not celebrities—will benefit as a result.

The “New Portraits” series stirred outrage in the fall of 2014, when Prince and New York’s Madison Gallery first exhibited the 5’ x 6’ canvases, the hearts of which were images copied from both amateurs’ and professionals’ Instagram posts. Subsequently, the show moved to the Gagosian Gallery, where the “Instagram series” continued to make headlines, with the Prince canvases selling to collectors for prices many found shocking, considering that nearly the entire work being sold was somebody else’s photograph.

Opinions vary about the “New Portraits” series as an artistic statement, but the question of copyright infringement versus fair use became clearer on May 11, 2023, when Judge Stein denied Prince’s motion for summary judgment (MSJ), and it sharpened further about a week later with the Warhol decision. As the district court stated in May:

A close comparison reveals that Prince enlarged the images when he printed them onto the canvases, cropped portions of the photographs, added the Instagram frame, and included his own comments. But these alterations do not begin to approach those found to be transformative as a matter of law by the Second Circuit.

Even before Warhol, the district court found unpersuasive Prince’s shifting theories as to why his use was transformative. Arguing at the outset that the purpose of “New Portraits” was just “art and fun,” Prince later tried to hone his defense, averring that the series was a comment about social media and culture. Indeed, that commentary was present in the show—I said as much when the story was new—but that kind of commentary does not make the uses at issue fair uses.

As the district court stated in denying Prince’s MSJ, and as SCOTUS resoundingly affirmed in Warhol, the use of a protected work must express some “critical bearing” on the work used. With that clear finding in a Supreme Court case so obviously analogous to the “New Portraits” case, Prince could not have expected to prevail had he proceeded to trial. His canvases titled Portrait of Rastajay 92 and Portrait of Kim Gordon express no comment of any kind about Graham’s Rastafarian Smoking a Joint or McNatt’s Kim Gordon 1, respectively.

“Phony fraud photographers keep mooching me. Why? I changed the game.”

Tweet by Richard Prince, 2017

Photographers everywhere will celebrate this outcome, not only as a validation of their copyrights, but also because Richard Prince himself is hardly modest about his appropriations or his presumed right to make them. Amid a 2016 tweet storm over the use of Graham’s photograph, Prince wrote defiantly: “U want fame? Take mine. Only thing that counts is good art. All the everything else is bullshit.” To this, Prince’s art critic friend Jerry Saltz added, “Amen. These litigious ‘artists’ ‘photographers’ are so middle-class conservative it shivers the timbers. Neo-know-nothings.” Well, timbers shivered, I guess. Turns out that knowing nothing about fair use can be costly.

On that point, it is highly significant that this case does not end in a confidential settlement, in which the plaintiffs would ordinarily receive more money. According to plaintiffs’ attorney David Marriott of Cravath, Swaine & Moore, Graham and McNatt both wanted a public ruling by the court to send a message to the creative community that what Prince had done was categorically not allowed under the fair use exception.

Graham’s “Untimely” Registration

Of note, Donald Graham’s resolution in this case is another example of the importance of timely registration with the U.S. Copyright Office. At the time Prince first exhibited Rastafarian Smoking a Joint, the photograph, created in 1996, was not registered. This could have barred Graham from federal litigation, or at least denied him access to statutory damages, leaving him to prove actual damages (i.e., loss of income). Graham registered the photograph in October 2014, after the Madison show went up that September. That was too late to effectively litigate the original infringement, but Prince and Gagosian subsequently made infringing use of Rastafarian by producing a billboard and an art book, which together violated Graham’s rights of reproduction, display, and distribution.

Warhol Reins in Prince

As discussed in other posts, there are aspects of the Warhol case that remain food for thought, if not litigation—namely the unanswered substantial similarity question as to whether Warhol copied the protectable expression in the Goldsmith photograph. But the importance of that decision was that it resharpened the contours of the transformativeness consideration after many years in which defendants tried to advance vague and overbroad definitions that would deprive the fair use doctrine of all meaning.

Richard Prince’s attempt to fit the “New Portraits” project into a transformative finding was a classic and high-profile example of pushing the boundary of the fair use doctrine beyond reason. And it is hard to miss the cultural significance of the Court’s posthumous check on Andy Warhol ultimately tempering the hubris of Richard Prince. Both artists benefited substantially from the metaphysics of merely attaching their names to works, including creative expressions they did not really make. As such, the judgment in Graham and McNatt’s favor is a satisfying punctuation mark on this saga, one worthy of a toast. Cheers!

“Fair Use” is Not a Great Business Plan

Lately, we’ve seen several headlines and comments from tech giants saying that AI ventures simply cannot succeed if they are forced to contend with the copyrights in the billions of works they have scraped for the purpose of machine learning (ML). When these headlines are paired with rampant assertions that ML is inherently fair use—a subject addressed in last Wednesday’s Senate Judiciary Committee (SJC) hearing on AI and journalism—one has to wonder about the business decisions being made before generative AI exploded last year.

In many posts on this blog, including at least a few written during “Fair Use Week,” I have repeated the caveat that “fair use” is not a magic phrase that makes infringement claims disappear. Usually, that advice is directed at small and independent users of works, suggesting they not listen to Big Tech and its network of academics and activists, who will not be on the hook for the small guy’s copyright infringement. I always assumed the big guys knew better, that they were merely chanting the “fair use” mantra as a rhetorical device in the blogosphere to promote the anti-copyright agenda. But maybe they don’t know better.

If I were an AI investor asking about potential liability, and the founders told me, “Don’t worry, what we’re doing is fair use,” my immediate response would be to ask whether there is sufficient funding for major litigation, to say nothing of predicting the outcome of that litigation. Because simply put, the party who conjures the term “fair use” has effectively conceded that a potential liability for copyright infringement exists. And if making that bet is a bad business decision, then that’s the founders’ problem, not a flaw in copyright law.

No matter what the critics say, or how hard certain academics try to alter its meaning, the courts are clear that fair use is an affirmative defense to a claim of copyright infringement, which means that building a business venture on an assumption of fair use is tantamount to assuming that lawsuits are coming. And if it’s a multi-billion-dollar venture that potentially infringes millions of works owned by major corporations, then the lawsuits are going to be big—perhaps even existential.

Do Not Expect Congress to Change Fair Use in Any Direction

Notably, as reported in Wired, Condé Nast CEO Roger Lynch stated at one point during questioning by the SJC last week, “If Congress could clarify that the use of our content, or other publisher content, for the training and output of AI models is not fair use, then the free market will take care of the rest,” to which Sen. Hawley replied that this seems reasonable. But I wonder about this exchange. While it is encouraging to find the senators more sympathetic to the news organizations than to the AI developers, I doubt (and would not even hope) that Congress is going to amend the law to explicitly state that ML is categorically never fair use.

Fair use comprises a history of judge-made law that was codified as Section 107 of the 1976 revision of the U.S. Copyright Act. But the statute does not draw bright lines stating that X is always fair use and Y is never fair use, and for good reason: justice for all parties is best served by a court weighing the specific facts of a specific use of a specific work or body of works. Hence, an attorney will tell you that fair use is a “fact-intensive” consideration.

If Congress were to explicitly declare, for instance, that ML can never be fair use, this would be a significant departure from doctrine, and one that is preemptively unjust to the potential AI developer with a fact pattern that would favor a finding of fair use. As much as I find the major generative AI companies to be some combination of arrogant and/or useless, and as much as I scorn their generalizations to date about fair use, it would be wrong to endorse legislative revision of the fair use doctrine as a sound response.

In fact, if the court were to find fair use for ML in New York Times v. OpenAI (and I doubt it will), and Congress sought to remedy that outcome, it would still not make sense to amend Section 107. If anything, news organizations and other copyright owners would likely seek a new section of the Copyright Act tailored to the nature of the new form of harm, which Big Tech would then blindly oppose with every available resource. For instance, it is possible that the Times would not currently be suing OpenAI if the tech industry had not opposed the Journalism Competition and Preservation Act (JCPA), which would have temporarily exempted news organizations from antitrust barriers to collective bargaining for licensing their content.

Regardless, no party should be asking Congress to “clarify fair use” in response to AI. If the AI founders and investors made a bad bet on an ultimate finding of fair use, that’s tough noogies for them. But neither should content creators want Congress to open that particular can of worms and disturb the fair use case law. Of course, where Congress should intervene is to address harms caused by AI where no law currently applies. On that subject, the next post discusses the recently proposed No AI FRAUD Act.


Photo source: areporter.

The Generative AI Fair Use Defense Under Google Books

After the Supreme Court’s decision in AWF v. Goldsmith restored what many of us view as common sense to the fair use doctrine of transformativeness, the flurry of litigation against AI developers will test the same principle in a different light. As discussed on this blog and elsewhere, case law has produced two frameworks for considering whether the “purpose and character” of a use is transformative. One focuses on differences in expressive elements, like the use of Goldsmith’s photograph to make Warhol’s silkscreen; the other considers a use made for a unique purpose, like the millions of scanned books used to produce the Google Books search tool.

In Warhol, the Court affirmed that transformative expression must contain some element of “critical bearing” (i.e., comment) upon the work(s) used, and this concept, tied to the different character of the new work, is distinguished from the use of copyrighted works to create a tool or product that may be considered transformative because it is novel and beneficial for society. Notwithstanding the possibility that generative AI may prove to be harmful to society, the copyright question of the moment is whether the use of many millions of protected works to “train” these models is transformative under the same reasoning applied in Authors Guild v. Google (2015).

Because the Google Books search tool could only be developed by inputting millions of digitized books into the database, the argument being made is that this is obviously analogous to ingesting millions of protected works for AI training. And certainly, no one could doubt that generative AIs are novel, even revolutionary. But this may be where the comparisons end under fair use factor one, which considers the purpose of a use, inherent to which is a “justification for the taking.”[1]

The factor one decision in Google Books turns substantially on the court’s finding that the search tool provides information about the works used. “…Google’s claim of transformative purpose for copying from the works of others is to provide otherwise unavailable information about the originals,” the opinion states. While Google Books “test[ed] the boundaries of fair use,” the court held that the search tool furthered the interests of copyright law by providing various new ways to research the contents of books that would otherwise be impossible. Although unstated (because it would have been absurd), the recipients of the information provided by Google Books were/are human beings. And especially if some of those human beings use the information obtained to produce and/or engage with expressive works, the finding of fair use fulfills copyright’s constitutional purpose to “promote progress.”

Generative AI developers may try to argue that the use of creative works for training serves an “informational” purpose, but unlike Google Books, the information obtained from the ingested works only “informs” the machine itself. A generative AI does not, for instance, provide the human user with new ways to learn about Renaissance painting (or point to Renaissance works) but instead trains itself how to make images that look like works from the Renaissance.[2] Setting aside the cultural debate about the value of such tools, the purpose of the generative AI is clearly distinguishable from the reasoning applied in Google Books.

As discussed in an earlier post, a consideration of AI under fair use should turn on the question of promoting “authorship,” lest the courts become distracted by the broadly innovative nature of these systems—especially for any purpose outside the scope of copyright.[3] In that post, I argued that generative AIs do not promote “authorship,” and I would die on that hill if the developers’ expectation is that these tools will autonomously generate “creative” works without any human involvement.

For instance, if “singer/songwriter” Anna Indiana is a primitive example of what’s to come—and my understanding is that this is exactly what the AI models are designed to do—then the “purpose” of these systems is not to promote authorship, but to obliterate authorship by removing humans from the “creative” process. As such, the fair use defense cannot apply because without the element of authorship, the consideration is no longer a copyright matter.

On the other hand, as stated in my comments to the Copyright Office, it is conceivable that a human author might “collaborate” with an AI tool to produce a work that meets the “authorship” threshold. For instance, by using a set of prompts that articulate sufficient creative choices in the production of a visual work (or by uploading one’s own work and using an AI tool to modify it), one can make a reasonable argument that this constitutes “authorship” under copyright law. This is one potential purpose of generative AI, and one which could favor a finding of transformativeness under similar principles articulated in Google Books.

But Google Books did not present the court with so many unknown, relevant questions of fact.

The purpose of the Google Books search tool was clearly defined and fully developed when that case was decided in 2015. By contrast, fair use defenses of AI today are presented on behalf of technologies whose development is nascent and exponentially dynamic. Simply put, we do not know yet whether a particular generative AI will promote authorship or become a substitute for authorship—the former being favorable to a finding of fair use, the latter being fatal to such a finding. Here, proponents may argue that so long as there is a mix of uses, resulting in both authored and un-authored outputs, this is sufficient to find the purpose of a given AI transformative, but it seems likely that the current docket of cases will be decided before enough determinative facts can be known.

For now, it is worth remembering that sweeping statements alleging that generative AI training is “inherently fair use” are anathema to a doctrine that rejects such generalizations. Fair use remains a fact-intensive, case-by-case consideration, and one of the many difficulties with AI is that relevant facts are not only evolving, but they describe technologies unlike anything that has been examined under the fair use doctrine to date.


[1] Citing Campbell, which informs both Google Books and Warhol.

[2] I recognize that this is an oversimplification of what the AI can do.

[3] i.e., AI’s potential applications in areas like medicine or security should be dismissed as irrelevant to a fair use consideration of generative AIs that make “creative” works.

Photo by: chepkoelena531