Opportunity Costs (and with AI it may cost a bunch)

Lately, one reads a lot of statements with the preamble “Artificial intelligence presents opportunities and challenges…” But is this the right way to frame the conversation? Because if we’re talking about creative professionals and their industries, it is probably more accurate to say that generative AI presents clear threats and some opportunities. Although we are trying to predict future outcomes, and many expectations about AI (good or bad) may not come to pass, if generative AI is an existential threat to potentially millions of creative professionals while offering opportunities for a few, then it is wrong to begin the discussion as if opportunity and challenge are balanced forces.

Take, for example, the tentative agreement reached between the Writers Guild of America (WGA) and the motion picture producers, which includes the following provisions regarding the use of artificial intelligence:

  • AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the MBA, meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights.
  • A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services.
  • The Company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.
  • The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.

These conditions prove the point: they primarily seek to mitigate the threat of AI while opening a narrow, conditional window of opportunity to use it. Safeguards like these are necessary because it can be assumed that producers and showrunners will be tempted by the prospect of paying fewer writers to “collaborate” with generative AI to produce scripts. But even if that approach were to prove effective (and there are reasons to think it would not), a writers’ room of, say, two instead of ten is not necessarily an opportunity. And perhaps not even for the showrunners for very long.

Thinking solely about the U.S. economy, those laid-off writers would represent eight middle-class jobs lost—eight people who would curtail, if not cut off, their entertainment expenditures while they take the “opportunity” to ply their skills in other fields that may also be shedding jobs due to AI. If AI were to reduce the workforce in the entertainment industry alone, it would suck but could potentially fall within the principle of creative destruction. But if AI decimates work across multiple sectors at the same time, then products, including TV shows and movies, will lose customers, thereby nullifying the short-term savings gained by laying off those eight writers.

Meanwhile Creative Work Would Start to Suck

Beyond considering whether generative AI is an opportunity in cold, economic terms, it is hard to imagine outcomes that do not either diminish the cultural value of creative expression itself or trigger a rebellion against AI-generated material and dash the ambitions of the tech developers. In this regard, the “democratization of creativity” is a woefully ignorant goal as well as a dishonest talking point.

The promise that generative AI will “democratize creativity” should be read in the same light as Big Tech’s promise to “democratize information,” which has proven disastrous for democracy. Just as searching the web for “information” does not make the individual a journalist, instructing a generative AI to render ideas into expression does not make the individual an artist. And just like we continue to founder in a sea of disinformation, there is no broad, social value in “democratized” art any more than there is a market for children’s drawings tacked to a million refrigerators. If everyone is an artist then nobody is, and the value of creative expression diminishes accordingly.

That the creative process can be reduced to an algorithm which can learn how to write, draw, paint, etc. cannot be wholly denied when generative AIs are already doing these things and will presumably get better at doing them. However, the expectation that generative AI can or should displace artists may be the apotheosis of the TechBros’ enduring cynicism about the value of individual creators. In the trenches of the “copyright war,” creative professionals have been accused of being self-important, greedy, rent-seeking whiners unwilling to get real jobs. And now that Big Tech is releasing tools that promise to obviate the need for creators, the newest hashtag claims that professional artists enjoy a #CreativityPrivilege that will finally be disrupted. In this context, generative AI can be seen as tech’s nuclear strike in the copyright war to prove once and for all that “original expression” is an illusion and, therefore, that any rights associated with original expression are a mythical construct that must be abandoned.

This implicitly jealous relationship with artists is an extension of a longstanding problem: the tech-utopian, anti-copyright crowd has never quite understood what artists do or why they do it. For instance, artistic output is not solely the result of interest plus training. Many great artists never receive formal training, and many need to escape formal training to find their own voices. Every artist will eventually, if not continually, go through a process of learning and unlearning various “rules” to make the craft their own. It may be a cliché to think of the artist as suffering or broken, but it is certain that the artist is sensitive to the world in a way that moves her to respond through expression. And these are just some of the unpredictable human qualities that no computer can emulate with the math of probability outcomes.

Although it is plausibly argued that a creative-minded individual might have a disability which AI can help overcome, citing this hypothetical to justify the “democratization” narrative comes with a few caveats including:  1) enabling the few does not justify displacing the many; 2) if AI devastates the professional, creative ecosystem, the newly enabled artist can only be a hobbyist among millions of other hobbyists; and 3) if anyone believes the billion-dollar investments in generative AI were made with the intent to help someone with cerebral palsy become a painter, I’m calling billion-dollar bullshit. That may be a positive effect, but it is not the purpose of these machines.

Could the Models Simply Fall Down?

If generative AIs were to displace enough professional artists, it is possible that entropy will demand that the models exhaust their capacity for new outputs—let alone outputs that are of any interest or value. If we remove, say, one million working artists from the equation over the next few years, what will continue to feed the training models? Is the “sum of all human output” as of today sufficient to enable a generative AI to produce infinite, relevant expressions indefinitely? Maybe. But not necessarily.

Because artists are people who respond to the world through expression, timeliness and context matter a great deal. There are many reasons–from aesthetics to subject matter–why theater of the 19th century or television programs of the 1980s or ad campaigns of the 1960s are anachronistic to a contemporary audience. Yes, certain works endure or become freshly relevant as remakes because human experience is, in part, cyclical. But it is the artist’s sensitivity to the contemporary world that makes those connections, and the process of synthesizing that into creative expression is often instinctual as much as it is intellectual.

Yes, artists recycle and build upon prior works, but the relevance of a new expression at a given time and place requires a connection with an audience that, again, is not merely the result of a probability outcome. This anticipates the likelihood that a lot of AI-generated work will be good enough but not necessarily good—a concern that directly affects the market for commercial art where many creators make a living.

For example, the stock music market for commercial use is built on a network of composers with the skills to produce a variety of tracks based on familiar and often popular music. If generative AI can adequately produce similar tracks by cutting out the human composer, the market for many composers is in peril. But again, if AI were to kill off or dramatically reduce new, human composition, it is conceivable that the “composition machine” might eventually fizzle out as it tries to burn the same fuel over and over.

No doubt, artificial intelligence will seed new opportunities, though I maintain that these are in fields other than the production of creative work. If the digital revolution in the creative market has taught us anything, it is that these technologies are generally an opportunity for owners of the tech at a tremendous cost to professional creators. Without the right safeguards, AI could exacerbate this trend in ways that will cost everyone.


Photo by: robcaven

Generative AI Goes to the Opera

I think music is the purest artform because it is uniquely capable of provoking strong emotional responses without necessarily conveying meaning or information. Yes, one could say the same thing about abstract visual art, but I think the brain is hardwired to at least try to read meaning in visual expression and that this is not so with instrumental music. Moreover, I don’t think any medium is so universally provocative of human emotion as music.

It is admittedly cliché to talk about operatic arias provoking tears, but in my experience, they really do. In fact, one of my favorite arias is about a tear, aptly entitled “Una Furtiva Lagrima” (One Furtive Tear) from Gaetano Donizetti’s 1832 opera L’elisir d’amore (The Elixir of Love). I do have a personal relationship with this song because it was first introduced to me by my late father-in-law, a tenor who sang with several U.S. opera companies, served as artist in residence at The Israeli National Opera, and sang for Pope John Paul II in 1988. I wish I had a digital version of his “Una Furtiva Lagrima” to share because it is, in classical terms, the shiznit.

But I was thinking about that aria for this post because, notwithstanding the familial connection, nothing external to the music influences its effect on me. I am not an expert on opera or Donizetti, and I do not fully understand the Italian libretto. Hence, the mechanics by which the score and the tenor’s performance reach through this curmudgeon’s crusty exterior to trigger an emotional response can be boiled down to a science, which means that a similar experience can be created by a generative AI. And so, the elephant in the room asks the obvious question:  Will the provenance of a work matter to the people who experience it?

I recognize that music by generative AI is already responding to this question, but these early sprouts in the market do not tell us what the broader cultural effects might be in a future without Donizettis, Domingos, or orchestras. One valid prediction could be that it won’t matter to the audience experiencing the music whether it was generated by a machine or another human. If a song produces spontaneous tears or laughter or a desire to dance, then who cares if it was made in a lab rather than by charming Liverpudlians sweating it out in a London studio?

Most Artists Are Not Performers

This conversation requires that we make a distinction between performance and composition. In other posts, when I’ve scorned the idea of machines replacing artists, I have generally referred to performance and drawn analogies to sports. One that seems to resonate in conversation is my NASCAR example because this is basically watching machines move in circles and waiting to see which machine finishes the requisite number of circles first. This lifeless description makes the point that without the people in the drivers’ seats and pit crews—humans who are largely hidden from view during the race—NASCAR would be about as interesting as watching an oil pump bob its mechanical head at the ground.

I believe our desire, or need, to experience performance—whether it’s Blake Morgan playing his music or Coco Gauff winning the Women’s US Open—mitigates AI’s power to usurp the role of many artists. But if this is true, the rule only applies when composition and performance are deeply intertwined, as with singer/songwriters like the recently late Jimmy Buffett. An AI “Caribbean-Drunk-Rock-n-Roll-Music”[1] generator could never foster the whole experience that became the Buffett brand. But could this ersatz “Margaritaville” mixer compose the equivalent of a new “Come Monday,” and if so, would it matter to future listeners who have no idea what the AI “learned” from Jimmy?

Most creators are “composers” and not “performers,” often as removed from the audience experiencing their work as I am from Donizetti while listening to his aria in 2023. And frankly, Donizetti, who died in 1848, is hardly more obscure to the average listener than Rod Temperton, who died in 2016 after writing some of the most popular songs of the 1970s and 80s, including several of Michael Jackson’s biggest hits. Never in my teen years was I aware of Mr. Temperton’s role in all those songs.

So, keeping the focus on the composers, authors, painters, photographers, filmmakers et al. who do not perform, is there some anthropological reason to believe (hope) that artists will not be replaced by machines making music, books, visual arts, etc.? I understand that there are practical reasons why AIs may not get there at scale, but the question I’m asking is more about us than about the technology. Will the science that makes music provocative continue to work on the human listener, if future compositions are produced by things that cannot feel heartache or longing or humor, etc.? Put differently, will the novelty of generative AI wear off because the compositions it produces will become flat, bloodless, and disposable?

In my book, I wondered why an advanced AI (one that can make even semi-autonomous decisions) would bother to produce “art” upon reaching a certain threshold in its so-called intelligence. If humans make art because it’s one way we confront, synthesize, and respond to the human experience, then perhaps the “smarter” the AI becomes, the more likely it is to realize that it has nothing to say because it has no experience. Or does the robot begin to create works in response to the robot experience and ignore its instructions to produce songs or novels or pictures for human consumption? I doubt it, but if this does happen, we can be sure that some humans will form a cult to follow the new bot prophet.

But I’m not really answering the thesis question, am I? Because I have no idea. I want to believe that the question was answered by Ian Malcolm (Jeff Goldblum) in Jurassic Park when he warned that nature finds a way.* Only instead of dinosaur nature triumphing over laboratory safeguards to keep them contained, it would be human nature instinctively rejecting synthetic “art” for reasons that are likewise ineffable. For better or worse, the AI experiment, like Jurassic Park, has begun, and we’ll have to wait and see who gets eaten. So, perhaps the new version of the Turing Test should not be whether the computer can make you believe it’s human, but whether it can provoke a furtive tear and then ask whether you mind that it is not human.


[1] Buffett’s own description from his live album You Had to Be There.

*Thanks to a comment by Bob Hill. Malcolm says “Life finds a way.” I edited the text to retain the point but drop the quotation marks.

Photo in collage: Thomas O’Leary in The Tales of Hoffmann.

Get AI Wrong and There Will Be Nothing to Forgive

We all know the mantra that says it’s better to ask forgiveness than permission. According to Quote Investigator, the earliest published version of this sentiment appeared in 1846, but QI’s editors believe the notion is older than that and cannot be attributed to any one source. Whatever its derivation or contexts in which it has been used over many decades, the phrase is presently associated with Silicon Valley and the heedless “move fast and break things” approach to technological development.

I was hardly alone in noticing that OceanGate CEO Stockton Rush tech-broed the design of his Titan submersible, dismissing warnings and safety regulations as barriers to innovation (one of Silicon Valley’s favorite refrains about pesky rules). Moreover, because the vessel imploded and the passengers were apparently killed before they knew what happened, Titan’s fate seems an apt harbinger of the technological singularity, whose analogy to crossing the event horizon of a black hole conjures an uncomfortable, squeezing parallel to death by implosion.

For anyone unfamiliar with the term technological singularity, it is often described as a threshold in AI development when computers “wake up” and their intelligence surpasses human intelligence. The event horizon analog, credited to sci-fi author Vernor Vinge, describes two principles: 1) that we have no way to predict what happens beyond the capacity of human intelligence; and 2) that we won’t know when we’ve crossed the horizon.

Of course, we need not anthropomorphize computers or manifest the many fictions about sentient machines to approach the horizon, and some experts believe we are already inside the gravitational pull of singularity. For instance, in a May editorial for The Hill, McGill University scholar J. Mauricio Gaona, asserting that singularity is “already underway,” states …

The possibility of soon reaching a point of singularity is often downplayed by those who benefit most from its development, arguing that AI has been designed solely to serve humanity and make humans more productive.

Such a proposition, however, has two structural flaws. First, singularity should not be viewed as a specific moment in time but as a process that, in many areas, has already started. Second, developing gradual independence of machines while fostering human dependence through their daily use will, in fact, produce the opposite result: more intelligent machines and less intelligent humans.  

Gaona notes that the commercial potential of AI in medicine, finance, transportation et al. will require unsupervised learning algorithms (i.e., machines that effectively “train” themselves) and that granting even limited autonomy to these systems means we have already stepped over the threshold toward singularity. Further, he argues, once AI meets quantum computing, then “Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity.” Not to worry, though, the U.S. Congress is on the job.

On June 21, Senator Schumer, speaking at the Center for Strategic and International Studies (CSIS), discussed the SAFE Innovation Framework for Artificial Intelligence. “Change at such blistering speed may seem frightening to some—but if applied correctly, AI promises to transform life on Earth for the better. It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds, and ensure peace. But there are real dangers too: job displacement, misinformation, a new age of weaponry, and the risk of being unable to manage this technology altogether,” Sen. Schumer stated. The SAFE framework is outlined as follows:

  • Security. Necessary to protect national security for the U.S. and economic security for residents whose jobs may be displaced by automation.
  • Accountability. The providers of AI systems must deploy these systems in a transparent and responsible way. They must remain responsible for violations of the protections ultimately put in place, whether by promoting misinformation, violating intellectual property rights, or deploying biased AI.
  • Foundations. AI algorithms and products must be developed in a way that promotes America’s foundations such as justice, freedom, and civil rights.
  • Explainability. The providers of AI systems must provide appropriate disclosures that inform the public about the system, the data it uses, and its contents.
  • Innovation. The overall guiding principle for any regulations or policy regarding AI should be to encourage, not quash, innovation so that the U.S. becomes and remains the global leader in this technology.

Is that all? Having worked for just over a decade on the edges of policymaking, I find it hard to believe that Congress can be nimble enough to address all those bullet points while keeping up with AI development itself. And that’s if Members agree about the framework’s principles. “Promotes … justice, freedom, and civil rights”? Near as I can tell, there is not much consensus on the meaning of those words these days. Or what about “misinformation”? How many of Schumer’s colleagues on the right can plausibly subscribe to a common definition of “misinformation” while they carry Trump’s luggage through the gauntlet of his well-earned indictments? With millions of American voters willfully blinding themselves to old-school evidence of criminal conduct, are we anywhere near capable of addressing the unprecedented realism of AI-generated chicanery?

It is certainly conceivable that with the right controls in place, AI can be harnessed to make life better for humans, and, indeed, if that is not the goal, then why continue to build it? Unfortunately, the answer from many of those doing the building is “because we can.” And, thus, we are locked into taking this roller-coaster ride whether we want to or not. At least if we do cross the threshold toward singularity, the tech-bros won’t have to ask humanity for forgiveness, though they may have to ask their machines for mercy.


Image sources by: vchalup, Agor2012