While people may continue to debate whether robots dream of electric sheep, let us please stop entertaining the notion that AIs “learn from artistic works the same way human artists learn” to make art. In a recent article solidly arguing that Big Tech is going to win again by exploiting creators to develop AI, Peter Csathy concludes:
For those of you who push back and argue that humans “train” on pre-existing copyrighted works all the time when they create works inspired by (or even “in the style of”) others, let’s be clear. They typically aren’t plagiarizing or making actual copies. But generative AI is when it “scrapes” each and every word.
Csathy is right, of course, but even his counterargument still accepts the premise of the analogy. And that’s part of the problem. Because the analogy is dumb and should be rejected as dumb, or at least useless in the broader discussion about machine learning and generative AI. The comparison of AI “training” to human artistic “training” fosters a legal, moral, and cultural equivalency that should be dismissed with prejudice, if only because whatever we call the product of generative AI, it ain’t art.
A child finds a shell on the beach she thinks is pretty. She takes the shell home, cleans it off, and places it on a nightstand or other surface to decorate her room. The shell is fun to look at, and its texture, shape, and color inspire the child to hold it in her hand, study it for long periods of time, and perhaps even make new discoveries about it. The shell shares many qualities with art, but it is not art for the simple fact that no human made the object. Likewise, autonomously AI-generated works are just pretty seashells on the beach.
The essential anthropic contribution to artistic expression is not merely a doctrinal principle of copyright law (i.e., one cannot own rights in the “works” of nature) but axiomatic to the nature of art as both practice and experience. Whether good or bad, high or low, decorative or provocative, commercial or non-commercial, art, by definition, is made by humans. In fact, it is the only enterprise I can think of—other than religion—that entails an instinct or acceptance that something ineffable and profound is inherent in it.
Art is talismanic much like an autograph, rare book, or historic artifact. The value of an original Van Gogh is not merely underwritten by its uniqueness but by a metaphysical—perhaps even spiritual—sense that the canvas, paint, and expression are all imbued with eidolons of the artist and his place in the human continuum. The instinct to perceive meaning in objects or to form personal relationships with works of expression may be ineffable, but the phenomenon cannot be denied any more than the element of faith can rationally be stripped from religious ritual. With a little practice, I could correctly perform a religious rite, but because I’m an atheist, it would be a meaningless act. An observer might not know, but I would, and so (according to the faithful) would God. Likewise, “art” without the undefinable ingredient (call it what you will) is as empty as a prayer without faith.
Whether readers agree with any of this, perhaps it is enough to simply understand that artists do not merely “learn” to make art by studying the mechanics of prior art. Yes, that is often part of the artist’s education but not necessarily the most important part. And many artists are autodidacts without any kind of formal training. But whatever training, methods, or media may be cited to describe the journey toward art-making, what the artist fundamentally does is synthesize experience into expressive works that both comment upon and alter human experience. And since AIs can’t have human experience, they really can’t learn shit about art.
Image source: ipopba