Things We Don’t Need: Generative AI

When I was planning to start The Illusion of More, I contemplated a category of posts under the heading We Don’t Need This. Although I abandoned the category, I thought it might serve as an editorial framework for articles about innovations that really aren’t innovative, and the low-tech invention that originally inspired the idea was the kiddie-car/shopping-cart hybrid. In case you haven’t had the pleasure, this vehicle enables a small child to “drive” a plastic car attached to the basket one pushes through the supermarket. As the parent of a small child (at the time IOM was launched), I found this innovation to be a terrible idea—one that demanded use the moment the child laid eyes upon it, but which mostly offered poor maneuverability through the aisles and unnecessary geometric struggle at check-out.

There is, of course, nothing connecting the kiddie-car/shopping-cart hybrid to generative AI except, in my view, the fact that we don’t need either one. Or at least, we don’t need most of what generative AI appears to be doing, and this is perhaps the most maddening aspect of the prominent generative AI tools making headlines—that they serve no purpose and, if we’re getting all IP about it, promote no progress. I’ve said it, and I’ll keep saying it: we do not need computers to make artistic works.

This month, the Federal Trade Commission (FTC) issued a report describing its early findings about AI’s potential harms that may be addressable under the agency’s purview. Charged with enforcing prohibitions against unfair and anticompetitive business practices and with protecting consumers, the FTC hosted a roundtable discussion with members of the creative community to hear their concerns about both the development and public deployment of generative AIs. As the report states:

Various competition and consumer protection concerns may arise when AI is deployed in the creative professions. Conduct—such as training an AI tool on protected expression without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness—may constitute an unfair method of competition or an unfair or deceptive practice.

In response to the report—specifically to the passage quoted above—three well-known copyright critics, Pamela Samuelson, Matthew Sag, and Christopher Sprigman (SS&S), criticized the FTC “both for its opacity and for the ways in which it may be interpreted (or misinterpreted) to chill innovation and restrict competition in the markets for AI technologies.” Before responding to that allegation, I must indulge in a little gallows humor and note that the economic and global-security leader of the free world is in danger of shredding its Constitution, going full-tilt authoritarian, and spiraling into a death roll of ignorance and cruelty. And yet, we’re going to talk about “chilling innovation” in generative AI as if it’s a matter of urgency. The world is in crisis, and billions have been invested to see who can do the best job getting a computer to write a poem or make a picture? Talk about whimpers instead of bangs.

There are two reasons that sentiment is not raw Luddism. The first is that it does not dismiss all AI development in the creative industries as useless; the second is that the “copyright stifles innovation” bullet point is a generalization that should never be uttered again—especially in light of its direct role in fostering the above-mentioned prospect of democracy’s collapse. We’ve heard all this before—specifically from SS&S and their colleagues in academia and the “digital rights” organizations. We’ve been told that copyright stifles the free and open internet, access to information, and the speech right.

But in addition to the fact that the premise itself was false, the grand social media experiment in the “democratization of everything” must be recognized as an abysmal failure, and its cheerleaders should muster the humility to stifle their tiresome and dangerous refrains in the context of AI. Social media companies and their friends in academia—and here, I must include President Obama’s Google-friendly administration—share considerable blame for the heedless, tech-enabled populism that has fostered so many social hazards, including a literal seditionist now leading one of America’s two political parties.

Notably, the FTC report does not mention copyright very much, and in fact, many of the creative professionals who participated in the discussions acknowledged that because they are not copyright owners (e.g., voice actors and screenwriters for hire were among the representatives), they do not currently have rights protecting them against generative AI producing the kinds of unfair outcomes the FTC is charged with mitigating. It would take too long a post to respond to all the critiques presented by SS&S, so I will focus on this statement:

We are concerned especially about the suggestion in the FTC’s Comments that AI training might be a Section 5 violation where it “diminishes the value of [a creator’s] existing or future works.” A hallmark of competition is that it diminishes the returns that producers are likely to garner relative to a less competitive marketplace. This is just as likely to be true in markets for creative goods, such as novels and paintings, as it is in markets for ordinary tangible goods like automobiles and groceries. AI agents that produce outputs that are not substantially similar to any work on which the AI agent was trained, and are thus not infringing on any particular copyright owner’s rights, are lawful competition for the works on which they are trained. Surely the FTC does not plan to have Section 5 displace the judgments of copyright law on what is and what is not lawful competition?

To summarize, that paragraph declares that it does not matter if generative AI displaces human authors and that, in fact, this is a threshold we should be eager to cross. Notwithstanding the fact that two of the high-profile lawsuits present compelling evidence of substantially similar outputs,[1] the more concerning implication of that paragraph is that SS&S endorse the inevitability that generative AI will devalue human creators and/or eliminate them altogether. Moreover, calling this eventuality a form of “competition” reveals an unsettling perspective consistent with every anti-copyright paper I have ever read—namely, that the production of creative works is no different from the production of any other product or service.

I’ve said many times that copyright critics don’t understand artists, and here, the inapt word competition demonstrates why this axiom endures. Publishers are in competition with one another to an extent, but authors are not—at least not in the sense that the concept applies in other industries, least of all Big Tech. No novelist, for instance, wants to hold the undivided and exclusive attention of all readers the way Meta wants eyeballs never to stray for long from its platforms. Artists thrive in a diverse market of other artists, consumers benefit as a result, and copyright is an engine of that diversity, not a barrier to it. Artists may feel competitive or jealous at times, or even behave in a competitive manner (because they’re human), but the reality is that they need one another to exist at a scale that is not comparable to other “businesses.” True to form, copyright critics like to cite the interdependence of authors to highlight copyright’s limitations but then ignore the same principle in support of tech giants swallowing all creative enterprise whole.

The primary concern expressed by SS&S appears to be that the FTC alleges that AI training with copyrighted works is an act of infringement. Unsurprisingly, this same trio submitted comments to the Copyright Office arguing that AI training with protected works is fair use, but as that very question is already presented in several court cases, I assume SS&S are primarily concerned with optics here. The trio states, “The FTC has no authority to determine what is and what is not copyright infringement, or what is or is not fair use. Under governing law, that is a judicial function.”

Exactly. And the question is now before the courts. So, what’s the problem? That the FTC should not even raise the issue? In tweets, Samuelson and Sprigman argue that the FTC’s report is one-sided, that it is too creator-focused and does not account for the testimony or opinions of the technology companies developing AI. But while I certainly agree that multistakeholder hearings and the like are the proper approach to developing new policy, it is impossible to tolerate a complaint about lack of balance coming from the anti-copyright crowd at all, and from these individuals in particular. For instance, readers may not remember the American Law Institute Restatement of Copyright, initiated by Samuelson and led by Sprigman, but critics of the project—some of the most prominent names in copyright scholarship—specifically cite the opacity of the restatement process and the deafness of its managers to the concerns and recommendations of their colleagues.

More broadly, it must be said that if, indeed, the FTC lately gave more attention to the creators than to the tech companies, this was a long-overdue anomaly. From at least the mid-1990s until 2016, the tech companies were treated with kid gloves, handed the keys to Washington, and feted as the economic and democratic engines they claimed to be. Since 2016, sentiment has swung in the other direction, as many Americans began to see how disinformation plus data manipulation can become a wrecking ball for a whole society.

If Big Tech lost the previously undeserved benefit of the doubt, good. AI has the potential to exacerbate many of the same Web 2.0 harms at unprecedented speed and scale, and if the FTC, the USCO, the courts, or Congress look askance at the developers, then it is a mistrust well earned. And again, at least with regard to generative AI designed to make creative works, none of the parties empowered to write policy in this area should forget the bottom line: that when it comes to producing creative work, we truly do not need generative AI.


[1] Concord et al. v. Anthropic and NYT v. OpenAI, et al.

SEE ALSO: The Washington Post reported this month that Big Tech continues to significantly fund and influence academia in these policy areas.

Photo by: Jollier

