A Thousand Cuts: AI and Self-Destruction

I woke up the other day thinking about artificial intelligence (AI) in the context of the Cold War and the nuclear arms race, and curiously enough, the next two articles I read about AI made arms race references. Where my pre-caffeinated mind had gone was back to the early 1980s when, as teenagers, we often asked the futile question of why any nation needed to stockpile nuclear weapons in quantities that could destroy the world many times over.

Every generation of adolescents believes—and at times confirms—that the adults have no idea what the hell they’re doing; and watching the MADness of what often seemed like a rapturous embrace of nuclear annihilation was, perhaps, the unifying existential threat which shaped our generation’s worldview. Since then, reasonable arguments have been made that nuclear stalemate has yielded an unprecedented period of relative global peace, but the underlying question remains: Are we powerless to stop the development of new modes of self-destruction?

Of course, push-button extinction is easy to imagine and, in a way, easy to ignore. If something were to go terribly wrong, and the missiles fly, it’s game over in a matter of minutes with no timeouts left. So, it is possible to “stop worrying” if not quite “love the bomb” (h/t Strangelove); but today’s technological threats presage outcomes that are less merciful than swift obliteration. Instead, they offer a slow and seemingly inexorable decline toward the dystopias of science fiction—a future in which we are not wiped out in a flash but instead “amused to death” (h/t Postman) as we relinquish humanity itself to the exigencies of technologies that serve little or no purpose.

The first essay I read about AI, written by Anja Kaspersen and Wendell Wallach for the Carnegie Council, advocates a “reset” in ethical thinking about AI, arguing that giant technology investments are once again building systems with little consideration for their potential effect on people. “In the current AI discourse we perceive a widespread failure to appreciate why it is so important to champion human dignity. There is risk of creating a world in which meaning and value are stripped from human life,” the authors write. Later, they quote Robert Oppenheimer …

It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge, and are willing to take the consequences.

I have argued repeatedly that generative AI “art” is devoid of meaning and value and that the question posed by these technologies is not merely how they might influence copyright law, but whether they should exist at all. It may seem farfetched to contemplate banning or regulating the development of AI tech, but it should not be viewed as an outlandish proposal. If certain AI developments have the capacity to dramatically alter human existence—perhaps even erode what it means to be human—why is this any less a subject of public policy than regulating a nuclear power plant or food safety?

Of course, public policy means legislators, and it is quixotic to believe that any Congress, let alone the current one, could sensibly address AI before the industry causes havoc. At best, the tech would flood the market long before the most sincere, bipartisan efforts of lawmakers could grasp the issues; and at worst, far too many politicians have shown that they would sooner exploit these technologies for their own gain than seek to regulate them in the public interest. “AI applications are increasingly being developed to track and manipulate humans, whether for commercial, political, or military purposes, by all means available—including deception,” write Kaspersen and Wallach. I think it’s fair to read that as Cambridge Analytica 2.0 and to recognize that the parties who used the Beta version are still around—and many have offices on Capitol Hill.

Kaspersen and Wallach predict that we may soon discover that generative AI will have the same effect on education that “social media has had on truth.” In response, I would ask the following: In the seven years since the destructive power of social media became headline news, have those revelations significantly changed the conversation, let alone muted the cyber-libertarian dogma of the platform owners? I suspect that AI in the classroom threatens to exacerbate rather than parallel the damage done by social media to truth (i.e., reason). If social media has dulled Socratic skills with the flavors of narcissism, ChatGPT promises a future that does not remember what Socratic skills used to mean.

And that brings me to the next article I read in which Chris Gillard and Pete Rorabaugh, writing for Slate, use “arms race” as a metaphor to criticize technological responses to the prospect of students cheating with AI systems like ChatGPT. Their article begins:

In the classroom of the future—if there still are any—it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. It’s not.

In what I feared might be another tech-apologist piece labeling concern about AI a “moral panic,” Gillard and Rorabaugh make the opposite point. Their criticism of software solutions to mitigate student cheating is that they reflect small thinking, erroneously accepting as a fait accompli that these AI systems are here to stay whether we like it or not. “Telling us that resistance to a particular technology is futile is a favorite talking point for technologists who release systems with few if any guardrails out into the world and then put the onus on society to address most of the problems that arise,” they write.

In other words, here we go again. The ethical, and perhaps legal, challenges posed by AI are an extension of the same conversation we generally failed to have about social media and its cheery promises to be an engine of democracy. “It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built,” Gillard and Rorabaugh argue. I would like to agree but am skeptical that the imagination required to reject certain technologies exists outside the rooms where ethicists gather. And this is why I wake up thinking about AI in the context of the Cold War, except of course that the doctrine of Mutually Assured Destruction was rational by contrast.


Photo by author.

Comparing AI Prompts to Button-Pushing on a Camera

Plenty is being said about AI systems that generate visual works, written works, music, etc. And plenty more will be said, especially now that lawsuits have been filed against some of the AI-generated image companies. In this post, I want to address a misconception about authorship in copyright law that may be warping the AI conversation. As I understand the argument, some AI proponents allege that the act of writing prompts is comparable to the act of pushing the button on a camera and, therefore, vests copyright rights in the proverbial “button pusher.”

Although it is possible to conceive of a scenario in which this analogy might apply, it is important to first understand that the underlying premise (i.e., that button pushing establishes authorship in a photograph) is wrong. In fact, when photography emerged as the first machine-made work, it posed a challenge to copyright law that still provides an ideal context for discussing what it means to say that copyright protects creative expression the moment the author causes that expression to be fixed in a tangible medium. Note that the key ingredients are expression, an author, and fixation, and inherent to the process binding all three is an interval of human effort enabling the author’s concept (or vision) of the expression to be manifest as fixation.

With photography, the interval of effort may be stately or a mere fraction of a second, but copyright law does not discriminate between the photographer who carries a vision in her mind for weeks of preparation and arrangement and the photographer who captures a fleeting moment from real life. In both cases, triggering the shutter is the proximate cause of fixation,[1] but vesting copyright rights in the photographer is predicated on an assumption that, even in a fraction of a second, she made creative choices sufficient to find a modicum of original expression in the image.

Various Scenarios in Which It Is Not About the Button

In the case of a studio shoot with a lot of preparation, lighting, props, wardrobe, etc., the photographer may not even touch the camera very often. It may be mounted on a tripod with an assistant triggering the shutter from a computer or remote control while the photographer directs all the creative aspects that comprise the resulting images. Copyright holds unequivocally that this individual is the author of the photographs because it is his expression that is being fixed in each image; the “button-pushing” is irrelevant except as a purely mechanical step in fixation.[2]

For the street photographer or photojournalist, the same principles apply, but copyright allows for the arguably metaphysical assumption that even in the tiny interval between seeing the real-life subject and capturing it, the photographer makes subtle choices that imbue the work with sufficient expression to be protected. Again, the button causes fixation but is not the basis of authorship, and this would be evident in the analysis of the content and qualities of the photograph, if it were to become the subject of a copyright infringement lawsuit.

By contrast, if a truly accidental photograph is captured (e.g., by a camera accidentally dropped from the Eiffel Tower), there is no authorship in that image—not because a human did not push the button, but because there is no colorable nexus between the human’s mental conception and the resulting photograph. On the other hand, if a photographer intentionally drops a camera from the Eiffel Tower and triggers the shutter by remote on its way down, copyright attaches to those images—not because a human pushed the button, but because a human conceived of the series of falling photographs and arranged the circumstances by which they could be made.

Although it is important to note that cameras are not machines trained with a corpus of existing photographs, this last example may be the closest analogy to the prompt directing the AI generator (in its current state) to make an image. If the prompt writer has a general sense of the image she wants to produce, but there is still an element of chance about what the machine will make, the prompt writer may argue that she is no less an author than the photographer who intentionally allows some element of chance into the process of making his images.

While this premise sounds reasonable as a general proposition, what it really implies is a case-by-case consideration as to how much human expression exists in the resulting works. Even in the example of the camera tossed intentionally off the Eiffel Tower, the photographer can control certain qualities in the images and may even have a vision for how they are to be used, displayed, or distributed. He knows the characteristics of the camera and lens and can select settings with the intent to control some of the qualitative results in the final photos.

By contrast, the prompter directing the image-generating AI is arguably not in control of enough of the qualitative elements in the final image to claim authorship—at least not at the current state of the technology. Entering the prompt “A mermaid wrestling a sea lion in outer space in the style of Cartier-Bresson” may produce an image that checks each of those boxes, but the prompt writer is not controlling the qualitative choices that comprise the result. Composition, line weight, shading, lighting, texture, scale, proportion, etc. are all “selected” by the AI based on what it has “learned” from the millions of visual works fed into its training data, so there is a critical disconnect between the human’s vision of “A mermaid wrestling a sea lion in outer space in the style of Cartier-Bresson” and the interval of effort that fixes the image in a tangible medium.

At some future state of the technology, the human may prompt a draft image to be made and then prompt changes to the qualitative elements, at which point it may be tough to deny that there is authorship in the resulting work. If these technologies develop in this way—such that the prompter is essentially painting with words instead of a stylus—this anticipates that, for instance, a disabled individual could truly create visual works with her mind, akin to the way Stephen Hawking wrote books. But in this paradigm, the AI does not present a unique challenge to the concept of authorship because the human is in control of sufficient expression in the work.

Dynamic Ethical Standards

Of course, this theoretical discussion assumes integrity among individuals who claim authorship in various works. The guy whose camera accidentally snaps a photo does not have to admit he played no role in its making, and AI currently presents a similar challenge. The issue of integrity is a hot conversation we’re having in response to generative AI—especially in academia where ChatGPT is already “writing” papers for students. Notably, few people would question the judgment that the student who turns in a paper “written” by an AI is a cheat deserving the same sanctions as if he were caught plagiarizing. Yet, somehow, when the material is a “creative” work, AI advocates argue that the prompter is an author of a visual work comparable to a photographer using a camera.

This dichotomy can only be reconciled by confronting the fact that certain uses of AI are not only not authorship but are needlessly destructive to the very purpose of intellectual and cultural endeavor. The student who shirks writing his own paper learns nothing and so potentially graduates from a program unqualified. Likewise, the prompter using an image-generating AI is not an artist and contributes nothing to the purpose of art. Thus, while there may be uses for these systems, their potential cultural value depends on more than technological development for its own sake.

Because these technologies are still new and still primitive relative to their expected capabilities, it is hard to predict where the more serious aspects of the narrative will lead. Some of the generative AIs are barely more than toys at the moment (e.g., turning profile pics into oil paintings), but what they will do a year from now, let alone five years, will inform how we address the issues—cultural, legal, and ethical. For now, though, I insist that no, prompting is not equivalent to button-pushing with a camera, even if button-pushing were as significant as many people think it is.


[1] This is true with digital photography. With film, one could argue that the latent image on the negative is not fixation until it is at least developed because it cannot be perceived by either human or machine reader.

[2] And there are likely to be further steps like retouching or printing, which may fix the final version of the image.

Photo by author.

AI “Art” is Boring

Adam was bored alone; then Adam and Eve were bored together; then Adam and Eve and Cain and Abel were bored en famille; then the population of the world increased, and the peoples were bored en masse. To divert themselves they conceived the idea of constructing a tower high enough to reach the heavens. This idea is itself as boring as the tower was high, and constitutes a terrible proof of how boredom gained the upper hand. – Søren Kierkegaard (1843) –

I had not thought about Kierkegaard writing on the subject of boredom in years. The essay from which the above quote is extracted was a favorite in college for its biting humor, but something about Rogers Brubaker’s excellent article about democratizing culture sent me in search of my 38-year-old (ouch) copy of The Kierkegaard Anthology, and I think it was this paragraph of Brubaker’s which triggered the thought:

But the question is not just how many people engage in cultural production — it’s how people engage. The AI music company Amper promises to help customers “create your own original music in seconds.” The creativity involved is rather attenuated, amounting to editing and tweaking the music generated by the AI, but that didn’t stop Amper co-founder Drew Silverstein from evangelizing in a TED talk about how AI can “democratize music” by enabling “anyone to express their creativity through music.” 

That promise to “create your own original music in seconds” was the portkey back to Kierkegaard. “In the case of children, the ruinous character of boredom is universally acknowledged,” he writes, and, indeed, I maintain that boredom is the inevitable outcome of AI toys promising to make music, visual art, poetry, etc. We have all experienced as children and witnessed as adults that transition between playing with a new toy and rapid disenchantment because the toy fails to engage the imagination. I am not the only Gen-X parent, for instance, to notice that when LEGO began selling kits to build branded objects like Star Wars spaceships, my own children would usually complete the assembly once and then be done with the toy forever. By contrast, my contemporaries and I spent hours with sets composed of bricks and no predetermined design.

Kierkegaard proposes that the plebeian bores others and amuses himself while the aristocrat amuses others and bores himself—a dialectic perhaps well suited to describe the inevitable use of AI machines to “make one’s own music or art.” At the current state of the technology, the input of the human user is barely creative—little more than dropping a coin in a jukebox—and thus, all users similarly situated are plebeian bores for the time being. The works resulting from their prompts may amuse them (for a while), but they will mostly bore others who will only be interested in “making their own music” with the same toys. Before long, a million individual users of the music-generating AI will achieve a collective homeostatic boredom—a two-dimensional Babel leading nowhere.

Perhaps one of these accidental works will reach escape velocity, break through the gravitational force of mass boredom, and “go viral” for a fleeting period. Some AI-generated ditty might be next year’s “Baby Shark” or even share the apotheotic luminance of a “Gangnam Style.” Someone will choreograph a short dance to accompany the tune, and TikTokers will fall in line to perform their versions, and Big Tech will look down and see that it is good, and their disciples will proclaim, “Behold the new culture! The human songwriter is an anachronism.” And it will all be as boring as it is ephemeral.

It is possible, of course, that generative AIs will become sophisticated enough to be collaborative tools wielded by the human artists—that the human still selects and arranges the creative elements to achieve her vision while the AI “helps” in some way. If and when we get there, we shall see. But in the meantime, it is clear that AIs do not need to be more sophisticated to replace some creative human work right now. My good friend Marco North writes on Facebook to me, “A full roster of AI voice talent costs less than $100 a month, works 24/7 and [will] do endless revisions….Voice work is perfect gig work for actors, say goodbye to lots of that.”

A gifted polymath in film, photography, music, poetry, and prose, Marco writes a weekly blog called Impressions of an Expat. Initially written from Moscow, the blog now comes from Tbilisi, and in his latest post he describes a happenstance encounter with the statue of the Georgian poet Vazha-Pshavela (Luka Razikashvili) and his feelings about AI “art.” He asks:

Who will be the subject of the next statue? An algorithm? Will there be streets named after TikTok? Will we name a playground after a Spotify playlist curator? These are the people that tell our stories now. Midjourney highway will take you there. Take a left at ChatGPT square, you can’t miss it.

Yes. That is a vision of a possible future. Of course, if the tech giants can make the world just boring enough, then certain humans will do what certain humans do. They will disassemble the unengaging toy and turn it into something else—something called art. And then, the world will start to be interesting again.