Things We Don’t Need: Generative AI

When I was planning to start The Illusion of More, I contemplated a category of posts under the heading We Don’t Need This. Although I abandoned the category, I thought it might serve as an editorial framework for articles about innovations that really aren’t innovative, and the low-tech invention that originally inspired the idea was the kiddie-car/shopping-cart hybrid. In case you haven’t had the pleasure, this vehicle enables a small child to “drive” a plastic car attached to the basket one pushes through the supermarket. As the parent of a small child (at the time IOM was launched), I found this innovation to be a terrible idea—one that demanded use the moment the child laid eyes upon it, but which mostly offered poor maneuverability through the aisles and unnecessary geometric struggle at check-out.

There is, of course, nothing connecting the kiddie-car/shopping-cart to generative AI except, in my view, the fact that we don’t need either one. Or at least, we don’t need most of what generative AI appears to be doing, and this is perhaps the most maddening aspect of the prominent generative AI tools making the headlines—that they serve no purpose and, if we’re getting all IP about it, promote no progress. I’ve said it, and I’ll keep saying it: we do not need computers to make artistic works.

This month, the Federal Trade Commission (FTC) issued a report describing its early findings about AI’s potential harms that may be addressable under the agency’s purview. Charged with enforcing prohibitions against unfair and anticompetitive business practices and with protecting consumers, the FTC hosted a roundtable discussion with members of the creative community to hear their concerns about both the development and public deployment of generative AIs. As the report states:

Various competition and consumer protection concerns may arise when AI is deployed in the creative professions. Conduct—such as training an AI tool on protected expression without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness—may constitute an unfair method of competition or an unfair or deceptive practice.

In response to the report—specifically to the passage quoted above—three well-known copyright critics, Pamela Samuelson, Matthew Sag, and Christopher Sprigman (SS&S), criticized the FTC “both for its opacity and for the ways in which it may be interpreted (or misinterpreted) to chill innovation and restrict competition in the markets for AI technologies.” Before responding to that allegation, I must indulge in a little gallows humor and mention that the economic and global-security leader of the free world is in danger of shredding its Constitution, going full-tilt authoritarian, and spiraling into a death roll of ignorance and cruelty. And yet, we’re going to talk about “chilling innovation” in generative AI as if it’s a matter of urgency. The world is in crisis, and billions have been invested to see who can do the best job getting a computer to write a poem or make a picture? Talk about whimpers instead of bangs.

There are two reasons that sentiment is not raw Luddism. The first is that it does not dismiss all AI development in the creative industries as useless; and the second is that the “copyright stifles innovation” bullet point is a generalization that should never be uttered again—especially in light of its direct role in fostering the above-mentioned prospect of democracy’s collapse. We’ve heard all this before—specifically from SS&S and their colleagues in academia and the “digital rights” organizations. We’ve been told that copyright stifles the free and open internet, access to information, and the speech right.

But in addition to the fact that the premise itself was false, the grand social media experiment in the “democratization of everything” must be recognized as an abysmal failure, and its cheerleaders should muster the humility to stifle their tiresome and dangerous refrains in the context of AI. Social media companies and their friends in academia—and here, I must include President Obama’s Google-friendly administration—share considerable blame for the heedless, tech-enabled populism that has fostered so many social hazards, including a literal seditionist now leading one of America’s two political parties.

Notably, the FTC report does not mention copyright very much, and in fact, many of the creative professionals who participated in the discussions acknowledged that because they are not copyright owners (e.g., voice actors and screenwriters for hire were among the representatives), they do not currently have rights protecting them against generative AI producing the kinds of unfair outcomes the FTC is charged with mitigating. Responding to all the critiques presented by SS&S would take too long a post, but I wanted to focus on this statement:

We are concerned especially about the suggestion in the FTC’s Comments that AI training might be a Section 5 violation where it “diminishes the value of [a creator’s] existing or future works.” A hallmark of competition is that it diminishes the returns that producers are likely to garner relative to a less competitive marketplace. This is just as likely to be true in markets for creative goods, such as novels and paintings, as it is in markets for ordinary tangible goods like automobiles and groceries. AI agents that produce outputs that are not substantially similar to any work on which the AI agent was trained, and are thus not infringing on any particular copyright owner’s rights, are lawful competition for the works on which they are trained.  Surely the FTC does not plan to have Section 5 displace the judgments of copyright law on what is and what is not lawful competition?

To summarize, that paragraph declares that it does not matter if generative AI displaces human authors, that in fact, it is a threshold we should be eager to cross. Notwithstanding the fact that two of the high-profile lawsuits present compelling evidence of substantially similar outputs,[1] the more concerning implication of that paragraph is that SS&S endorse the inevitability that generative AI will devalue human creators and/or eliminate them altogether. Moreover, calling this eventuality a form of “competition” reveals an unsettling perspective consistent with every anti-copyright paper I have ever read—namely, that the production of creative works is no different than the production of any other product or service.

I’ve said many times that copyright critics don’t understand artists, and here, the inapt word competition demonstrates why this axiom endures. For instance, publishers are in competition with one another to an extent, but authors are not—at least not in the sense that the concept applies in other industries—least of all Big Tech. No novelist, for instance, wants to hold the undivided and exclusive attention of all readers the way Meta wants eyeballs never to stray for long from its platforms. Artists thrive in a diverse market of other artists, consumers benefit as a result, and copyright is an engine of that diversity, not a barrier to it. Artists may feel competitive or jealous at times, or even behave in a competitive manner (because they’re human), but the reality is that they need one another to exist at a scale that is not comparable to other “businesses.” True to form, copyright critics like to cite the interdependence of authors to highlight copyright’s limitations but then ignore the same principle in support of tech giants swallowing all creative enterprise whole.

The primary concern expressed by SS&S appears to be that the FTC alleges that AI training with copyrighted works is an act of infringement. Unsurprisingly, this same trio submitted comments to the Copyright Office arguing that AI training with protected works is fair use, but as that very question is already presented in several court cases, I assume SS&S are primarily concerned with optics here. The trio states, “The FTC has no authority to determine what is and what is not copyright infringement, or what is or is not fair use. Under governing law, that is a judicial function.”

Exactly. And the question is now before the courts. So, what’s the problem? That the FTC should not even raise the issue? In tweets, Samuelson and Sprigman argue that the FTC’s report is one-sided—that it is too creator-focused and does not account for the testimony or opinions of the technology companies developing AI. But while I certainly agree that multistakeholder hearings and the like are the proper approach to developing new policy, it is impossible to tolerate a complaint about lack of balance coming from the anti-copyright crowd at all, and from these individuals in particular. For instance, readers may not remember the American Law Institute Restatement of Copyright, initiated by Samuelson and led by Sprigman, but critics of the project—some of the most prominent names in copyright scholarship—specifically cited the opacity of the restatement process and the deafness of its managers to the concerns and recommendations of their colleagues.

More broadly, it must be said that if, indeed, the FTC lately gave more attention to the creators than to the tech companies, then this was a long overdue anomaly. Between at least the mid-to-late 1990s and 2016, the tech companies were treated with kid gloves, handed the keys to Washington, and feted like the economic and democratic engines they claimed to be. Since 2016, sentiment has begun to swing in the other direction, as many Americans have come to see how disinformation plus data manipulation can become a wrecking ball for a whole society.

If Big Tech lost the previously undeserved benefit of the doubt, good. AI has the potential to exacerbate many of the same Web 2.0 harms at unprecedented speed and scale, and if the FTC, the USCO, the courts, or Congress look askance at the developers, then it is a mistrust well earned. And again, at least with regard to generative AI designed to make creative works, none of the parties empowered to write policy in this area should forget the bottom line: when it comes to producing creative work, we truly do not need generative AI.


[1] Concord et al. v. Anthropic and NYT v. OpenAI, et al.

SEE ALSO: The Washington Post reported this month that Big Tech continues to significantly fund and influence academia in these policy areas.

Photo by: Jollier

Generative AI Is a Lot Like a Video Tape Recorder, No?

In my last post, I focused on the hypothetical fair use defense of generative AI under the principles articulated in the Google Books decision of 2015. In this post, I want to address another claim that has arisen—both on social media, and in comments to the Copyright Office—namely that generative AI companies should be shielded against secondary liability for copyright infringement under the “Sony Safe Harbor.”

This refers to the 1984 Supreme Court decision in Sony v. Universal (the “Sony Betamax” case), holding that the video tape recorder (VTR) was legal based on two interrelated findings: 1) the fair use finding that consumers had a right to “time-shift” the viewing of televised material; and 2) that the VTR would, therefore, be used for substantially non-infringing purposes. Thus, although some parties would inevitably use the VTR for infringing purposes, Sony Corporation could not be liable for contributory infringement in such instances.

Clearly, there are some bright, shining distinctions between the VTR and a generative AI. The VTR was not designed by inputting millions of AV works into a computer model, and its purpose was not to generate “new” AV works. Instead, those obsolete machines performed two very basic functions: they made videotape copies of AV material, and they displayed copies of AV material for a specific type of personal use.[1] As noted in the post about Google Books, the Court in Sony also had a fully developed product and a clearly defined purpose in the VTR. And again, this is not so with respect to understanding the purpose of a given generative AI.

I believe the novelty (and even the uncertainty) of the AI’s purpose is fatal to the argument that generative AI companies are necessarily shielded by the “Sony Safe Harbor.” This is because in Sony, the anticipation of substantially non-infringing use rests on the novel “time-shifting” notion introduced into the fact-intensive fair use finding. In other words, “time-shifting” was a principle specific to the technology at issue, and no analogous concept lurks anywhere in the purpose of a given AI, let alone all AIs still in development. Imagine if Sony Corp. walked into court with a box of assembled electronic parts, declared that they’re not quite sure what the box can or will do yet (though it might distribute homemade copies into the market!), but they would really like a fair use decision and a liability ruling in their favor.

Non-Infringing Use Under Different Rationales

To be clear, it is plausible—even reasonable—to expect that the majority of outputs by a generative AI are, or will be, non-infringing. In fact, I believe this is one of the pitfalls of hoping that copyright can address the presumed threat of AI outputs: the substantial similarity bar for finding that Work A infringes Work B is thrown into a doctrinal tailspin. For example, when a person knowingly copies a work, this fosters a strong claim of infringement, but independent creation is a non-infringing act. And then, there are shades among willful infringement, innocent infringement, and non-infringement, depending on the facts of a particular case.

In addition to copyright’s limiting doctrines, which allow myriad “similar” works to coexist without legal conflict, I predict that generative AI has the potential to warp the evidentiary foundations necessary to prove infringement under a substantial similarity test. If that is correct, it may be one rationale for predicting widespread non-infringing use, but it is highly distinguishable from the foundations of the “Sony Safe Harbor.” Meanwhile, the consideration of secondary liability (as with fair use) depends substantially on the purpose of the technology at issue—and that purpose remains unclear.

The mundane, mechanical VTR only potentially threatened the “making available” rights for works produced and owned by creators. This is not remotely comparable to a computer model “trained” with millions of protected works for the purpose of enabling that computer model to produce new “works.” To paraphrase my brief comments to the Copyright Office, if a particular work goes into the machine and a potentially infringing copy of that work comes out of the machine, I do not believe there is any authority which broadly shields the developer from liability.

With that example in mind, though, it is worth noting that a code-based service, unlike a physical electronic device, can be revised concurrent with delivery to the market. Thus, unlike Sony and its Betamax, the AI developer looking to limit liability for copyright infringement has the opportunity (dare we say obligation?) to make every effort to design and continually update a system to avoid copyright infringement. This may entail licensing materials used to “train” a generative AI and/or ongoing tweaking of the algorithm to avoid infringing outputs. Either way, if the developers don’t want to build these kinds of safeguards for the most revolutionary tech of 2023, surely they cannot be allowed to hide behind a liability shield established in 1984 for a box now collecting dust in the attic.


[1] They also frustrated many consumers who tried to set the clocks, but that’s another matter.

Photo by: Tamer_Soliman

The Generative AI Fair Use Defense Under Google Books

After the Supreme Court’s decision in AWF v. Goldsmith restored what many of us view as common sense to the fair use doctrine of transformativeness, the flurry of litigation against AI developers will test the same principle in a different light. As discussed on this blog and elsewhere, caselaw has produced two frameworks for considering whether the “purpose and character” of a use is transformative. One focuses on differences in expressive elements, like the use of Goldsmith’s photograph to make Warhol’s silkscreen; and the other considers a use made for a unique purpose, like the millions of scanned books used to produce the Google Books search tool.

In Warhol, the Court affirmed that transformative expression must contain some element of “critical bearing” (i.e., comment) upon the work(s) used, and this concept, tied to the different character of the work, is distinguished from the use of copyrightable works to create a tool or product that may be considered transformative because it is novel and beneficial for society. Notwithstanding the possibility that generative AI may prove to be harmful to society, the copyright question of the moment is whether the use of many millions of protected works to “train” these models is transformative under the same reasoning applied in Authors Guild v. Google (2015).

Because the Google Books search tool could only be developed by inputting millions of digitized books into the database, the argument being made is that this is obviously analogous to ingesting millions of protected works for AI training. And certainly, no one could doubt that generative AIs are novel, even revolutionary. But this may be where the comparisons end under fair use factor one, which considers the purpose of a use, inherent to which is a “justification for the taking.”[1]

The factor one decision in Google Books turns substantially on the court’s finding that the search tool provides information about the works used. “…Google’s claim of transformative purpose for copying from the works of others is to provide otherwise unavailable information about the originals,” the opinion states. While Google Books “test[ed] the boundaries of fair use,” the court held that the search tool furthered the interests of copyright law by providing various new ways to research the contents of books that would otherwise be impossible. Although unstated (because it would have been absurd), the recipients of the information provided by Google Books were/are human beings. And especially if some of those human beings use the information obtained to produce and/or engage with expressive works, the finding of fair use fulfills copyright’s constitutional purpose to “promote progress.”

Generative AI developers may try to argue that the use of creative works for training serves an “informational” purpose, but unlike Google Books, the information obtained from the ingested works only “informs” the machine itself. A generative AI does not, for instance, provide the human user with new ways to learn about Renaissance painting (or point to Renaissance works) but instead trains itself to make images that look like works from the Renaissance.[2] Setting aside the cultural debate about the value of such tools, the purpose of the generative AI is clearly distinguishable from the reasoning applied in Google Books.

As discussed in an earlier post, a consideration of AI under fair use should turn on the question of promoting “authorship,” lest the courts become distracted by the broadly innovative nature of these systems—especially for any purpose outside the scope of copyright.[3] In that post, I argued that generative AIs do not promote “authorship,” and I would die on that hill if the developers’ expectation is that these tools will autonomously generate “creative” works without any human involvement.

For instance, if “singer/songwriter” Anna Indiana is a primitive example of what’s to come—and my understanding is that this is exactly what the AI models are designed to do—then the “purpose” of these systems is not to promote authorship, but to obliterate authorship by removing humans from the “creative” process. As such, the fair use defense cannot apply because without the element of authorship, the consideration is no longer a copyright matter.

On the other hand, as stated in my comments to the Copyright Office, it is conceivable that a human author might “collaborate” with an AI tool to produce a work that meets the “authorship” threshold. For instance, by using a set of prompts that articulate sufficient creative choices in the production of a visual work (or by uploading one’s own work and using an AI tool to modify it), one can make a reasonable argument that this constitutes “authorship” under copyright law. This is one potential purpose of generative AI, and one which could favor a finding of transformativeness under similar principles articulated in Google Books.

But Google Books did not present the court with so many unknown, relevant questions of fact.

The purpose of the Google Books search tool was clearly defined and fully developed when that case was decided in 2015. By contrast, fair use defenses of AI today are presented on behalf of technologies whose development is nascent and exponentially dynamic. Simply put, we do not know yet whether a particular generative AI will promote authorship or become a substitute for authorship—the former being favorable to a finding of fair use, the latter being fatal to such a finding. Here, proponents may argue that so long as there is a mix of uses, resulting in both authored and un-authored outputs, this is sufficient to find the purpose of a given AI transformative, but it seems likely that the current docket of cases will be decided before enough determinative facts can be known.

For now, it is worth remembering that sweeping statements alleging that generative AI training is “inherently fair use” are anathema to a doctrine that rejects such generalizations. Fair use remains a fact-intensive, case-by-case consideration, and one of the many difficulties with AI is that relevant facts are not only evolving, but they describe technologies unlike anything that has been examined under the fair use doctrine to date.


[1] Citing Campbell, which informs both Google Books and Warhol.

[2] I recognize that this is an oversimplification of what the AI can do.

[3] i.e., AI’s potential applications in areas like medicine or security should be dismissed as irrelevant to a fair use consideration of generative AIs that make “creative” works.

Photo by: chepkoelena531