Can an AI Own a Copyright?

(Image source by kentoh)

Remember Clippy?  He was the animated paper-clip assistant who lived several years ago amid the code of Microsoft Office. He would pop up rather suddenly on your desktop and interrupt your work to offer unsolicited advice as to how the work might be improved.  He was so annoying that Bill Gates reportedly sent an email memo to his staff titled Clippy Must Die.  And although Clippy does indeed lie in a virtual, unmarked grave somewhere in Redmond, perhaps we are not too far from seeing consumer and business software with integrated AIs that are subtle and helpful enough to be appreciated by the market. If so, what, if anything, does this mean for copyright?

Early this month, Kalev Leetaru, writing for Forbes, published an article asking the hypothetical question of whether an AI algorithm may someday own a copyright.  The blunt answer to this question should be No.  Intellectual property law is entirely based on understanding and valuing human intellect and creative capacity. No bots need apply.

Of course, the larger question Leetaru asks, and to which he alludes in his article, is whether or not we might soon have to endure theories about civil rights in general for AI algorithms.  It is a dismaying irony, to say the least, that Moore’s Law implies that an AI may attain a consciousness we call “existence” long before we come anywhere close to achieving civil rights for all humans. But then, isn’t this exact narrative often the theme of science fiction in which the AIs take over because we don’t know how to behave?  More to the point, science fiction has frequently answered the thesis question at hand by predicting that if the AIs actually wake up and become self-aware, the matter of their rights will no longer be our choice.  In that scenario, the AIs become the dominant species, and our so-called rights will be a subject of their mercy—or sense of our usefulness.

But between now and the technological singularity that may never occur, the question of intellectual property for AIs probably will become the focus of some litigation in the relatively near future. If we think of an AI not as an autonomous being but as a human-owned and programmed machine that may produce a creative work the human owners did not truly imagine would be made, then we can expect the company that owns the AI to register the resulting works, just as it would register works made for hire by an employee.  But should it be allowed to do so?

Even if the AI-produced work met the conditions necessary for a company to register a work made for hire (and this seems unlikely because an AI is not an employee), the broader issue of the work’s copyrightability dates back to the early days of photography, which was the first time the courts had to decide how much human agency is required for a machine-made work to be copyrightable. That case law begins in the U.S. in 1884; so although the progress of AIs may be a highly contemporary subject, the copyright question Leetaru raises is not so novel as it may appear.

As a rule, some modicum of human creativity—and therefore, some purposeful imagination of what the resulting work will be—has to be present for copyright to exist. The more clearly the human choices are observable in a work, the stronger the copyright claim will be.  So, if a human invents a “creative” AI but has no clear expectation of what the AI is going to produce, this is roughly parallel to installing a security camera which then autonomously captures random images that are not properly copyrightable.

Still, I think we can safely expect that a corporate owner of an AI that produces a creative work will want to register a copyright in that work. If so, the issue of protection should turn on the extent to which the human(s) had any creative influence to produce the work. But proving or disproving this factor may be quite difficult and not honestly represented by the AI’s owner.  And who knows if we can count on the testimony of the AI itself.

Perhaps the more likely, near-term scenario is one in which a work is a collaboration between an author and a consumer-product AI owned by a large company like Adobe.  The new iterations of “assistants” won’t be pesky animations telling you that you’ve misspelled addendum; they will be seamlessly integrated partners that can subtly contribute revisions reflecting an intuitive “understanding” of your intent.  At the very least, it’s easy to imagine business communicators relying on such advanced AIs to transform gobbledygook emails, texts, and the like into coherent missives.

We’re already seeing products that use adaptive AI for photography; every few months, it seems there’s another announcement that some new and terrible musical work has been produced by an AI; and people have been experimenting with AI and screenplay production for years. Even if the AIs don’t take over, they are likely to become more involved, and the more a creator allows a machine to make choices, the more her claim of copyright may be weakened in an actual litigation.

So, what if a creative human truly collaborates with an AI to the extent that the AI makes a substantial and measurable contribution to the finished work?  Let’s face it, if a robot can feel enough existential angst to commit suicide by drowning itself in a fountain, a robot artist will soon be among us.  Then, what happens if, for instance, a composer collaborates with an AI through a portal that is networked and monitored by the AI’s corporate owner?  Is this a road that leads to the corporate entity claiming joint ownership of the work?

Under current copyright law related to “jointly made” works, it would be quite difficult for the AI owner to demonstrate a) that the AI is “human enough” to claim an IP right at all; or b) that the human inventor/owners of the AI contributed through their invention to the finished work.  Plus, there must be an initial intent to create a jointly made work in order for all collaborators to claim ownership.  But, if the makers of AIs sought to claim some ownership in the works produced, they could lobby to change how the law defines “jointly made” works, at which point it will be interesting to see if the EFF fights for AI rights.

One way or another, copyright expert Sandra Aistars, Clinical Professor of Law at George Mason University, suggests that as AIs advance in this way, “User agreements would become even more important because that is where companies creating AIs would deal with the requirement that there be an intent to create a joint work.  Authors using new, adaptive tools would need to be more vigilant about paying attention to terms of service and end-user agreements.”

It’s tough to predict where this is leading.  What I do anticipate is that if the AIs themselves start asserting copyright ownership of their works and their AI attorneys engage in cyber-litigation over AI-to-AI infringement claims, the whole network will probably crash, and the last creator standing will be Clippy.

AI & Ethical Determinism

Photo by moipokupkigmailcom.

As artificial intelligence (AI) moves from the realm of science fiction to everyday science, leading technologists and scientists are asking themselves questions about how to write an ethical code. The most widely reported ethical dilemma involves the self-driving automobile and what’s known as the Trolley Problem.  This hypothetical challenge asks whether you would divert a speeding train from one track to another, knowing that doing so will kill one person but save the lives of several others.

This problem is transposed to the world of driverless vehicles and how a car’s AI should make split-second, life-and-death decisions. It’s a challenge that doesn’t just raise technological or ethical questions, but psychological ones as well.  The ways in which we cope with tragedy—or the anticipation of potential tragedy—in modern, developed society do not generally encompass the kind of determinism implied by AI.

Many cope with tragic accidents through faith—a belief that there is a deity with a plan, even if that plan cannot be known.  Those of us who are not religious cope without faith in a plan—by making peace with the fact that chaos and human fallibility can produce terrible outcomes.  In either case, there is a degree of comfort in the uncertainty—not comfort that lessens the pain of loss per se, but comfort that enables us to rationalize terrible events or to step outside the door without living in abject fear.  The uncertainty coupled with probability, and maintaining a measure of control, allows us to confidently get into our cars and drive around without expecting to be wiped out.  Remove a factor, for instance the measure of control, and this likely explains why more people are anxious about flying than riding in cars despite the statistics proving that the opposite should be true.

In this regard, when the unforeseen event occurs—brakes fail, an obstacle suddenly appears in the road, etc.—the outcome of the split-second reaction of a human driver is arguably more the result of chance than of any kind of rational decision-making. Even in relatively common examples, drivers are told that they should almost never swerve to avoid wildlife running across the road; better to hit a squirrel than to risk colliding with an oncoming car or slamming into a tree and killing themselves. But that instinct to avoid is strong and gets stronger when the animal that suddenly appears is a stray cat or dog.  The point is that whatever the outcome—whether a flattened squirrel or several dead motorists—the whole series of events, including the driver’s reaction, can be chalked up to a degree of uncertainty, and in that uncertainty lie many of our psychological coping mechanisms.

But what happens when humans pre-determine the outcome of certain ethical dilemmas and encode these into machines that we then grant authority to make these decisions?  In the simple case cited above, the squirrel is killed and all humans live, but what about a split-second decision that will result in the death of a passenger versus the deaths of a mother and baby crossing the road?  In the Summer of 2016, MIT researchers, grappling with exactly these types of questions, launched a website called Moral Machine which asks users to make a set of lose-lose decisions in various hypothetical traffic scenarios in which some parties face certain death.  Anyone can take the “test”, and the site will reveal how you “score” relative to the ethical decisions made by others.

Of course, the Moral Machine tests present the user with information that a car’s AI would, in principle, never know—like the fact that some of the potential victims are criminals.  Age, however, is a factor in some of the scenarios, and that condition strikes me as more credible—a vehicle might know it’s carrying a septuagenarian couple and may, therefore, decide that it is more ethical to kill them rather than a young family. The senior couple might even make the same selfless decision themselves, but such calculations don’t really occur when a human operator is reacting to an emergency faster than he can think.

What’s eerie about some of these Moral Machine tests is the implication that the data set used to enable an AI to make ethical decisions could theoretically include more than mere numbers (i.e. that the machine would simply default to saving more lives than it takes).  Age could be a factor, but what about net worth or relative “value” to society?  Does the AI wipe out a whole busload of kids to save one physicist or surgeon or even a Kardashian? What about race or sexual orientation? This raises the further question of whether these pre-determined decisions would be public knowledge or trade secrets, both of which present huge and unprecedented moral dilemmas.
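To see how quickly “mere numbers” become value judgments, consider a purely hypothetical sketch of the kind of utilitarian scoring such a system might encode. Nothing here reflects Moral Machine’s actual methodology or any manufacturer’s code; the function names and weights are invented for illustration only.

```python
# Hypothetical sketch only -- not Moral Machine's methodology, and not any
# real vehicle's decision logic. The names and weights are invented.

def outcome_cost(group, weights=None):
    """Total 'cost' of losing this group: a plain headcount unless
    per-person weights (age, 'value to society', etc.) are supplied."""
    weights = weights or {}
    return sum(weights.get(person, 1.0) for person in group)

def sacrifice(group_a, group_b, weights=None):
    """Return the group whose loss costs less under the weighting."""
    if outcome_cost(group_a, weights) <= outcome_cost(group_b, weights):
        return group_a
    return group_b
```

With no weights, the function simply defaults to saving more lives than it takes; the moment a weights table appears, someone has decided that some lives count for more than others, and whether that table is public knowledge or a trade secret is exactly the dilemma described above.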

In this regard, an article that appeared just a few days ago tells us that the next generation of self-driving cars from Mercedes-Benz will be programmed to save the passengers regardless of circumstances.  On the one hand, this is an algorithmic variation on the theme that the privileged class enjoy a priority lane on the right to life ahead of everyone else; but there is also something to be said for Mercedes choosing not to become trapped in the moral miasma of programming an ethical AI. Perhaps if all vehicles are required by law to default to a single directive like try to save the passengers, then this would approximate the instinctive but fallible reactions of human drivers and still allow uncertainty to play a role, thus absolving engineers of the responsibility to “play God.”  At least until the AI becomes self-aware and begins to make such decisions on its own.

After all, it’s hard not to notice the dystopian implications of a man-made, ethical determinism when we remove the element of chance and cede authority to carry out life-and-death decisions to machines.  When we remove the psychological buffer provided by chance, fate, God’s will, etc., then tragic events naturally beg explanation and, therefore, an instinct to assign blame. This of course raises the companion question about those who would inevitably try to game the system, people who would “jailbreak” their vehicles to override any code that might not favor them as the chosen survivors of an accident.  Suddenly, this places the civil libertarians who complain about the “right” to tinker with technological property on the wrong side of that argument insofar as the theoretical “greater good” is concerned.

The ethical AI question also becomes another factor leading to the conclusion that autonomous vehicles might not be private property for very long. Rationally, to have an AI shuttling all of us hither and yon, the system would have to be level in order for it to be remotely ethical, suggesting that the private property models from Mercedes or Tesla or Ford are merely stepping stones toward a public system, or a highly regulated one. But these outcomes are not what manufacturers or leading data companies investing in this future are going to have in mind.

This is one reason I agree with President Obama, when he said in a recent Wired interview with Joi Ito conducted by Scott Dadich, that it is essential that public funding play a role in the development of AI.  “…part of what we’re gonna have to understand is that if we want the values of a diverse community represented in these breakthrough technologies, then government funding has to be a part of it. And if government is not part of financing it, then all these issues that Joi has raised about the values embedded in these technologies end up being potentially lost or at least not properly debated,” said Obama.

Of course, the president is referring to developing an ethical AI beyond just vehicles, and his point is well taken.  The sci-fi future of AI is already here.  But the questions as to what values drive the decision-making are just barely being asked in the public debate. Meanwhile, the corporate rhetoric of “disruption” has already absolved many sins in the areas of privacy and intellectual property infringement.  Or as Sam Kriss put it so well in his excellent article for The Atlantic:  “Silicon Valley works by solving problems that hadn’t heretofore existed; its culture is pathologically fixated on the notion of ‘disruption.’ Tech products no longer feel like something offered to the public, but something imposed: The great visionary looks at the way everyone is doing something, and decides, single-handedly, to change it.”

NPR reported in late September that computer scientists from the major tech giants (IBM, Apple, Amazon, Microsoft, Facebook, and Google) formed The Partnership on Artificial Intelligence to Benefit People and Society.  One of the goals of the partnership is to develop standards for best practices in the field of AI, including tackling ethical questions like the Trolley Problem for vehicles.  But it is essential that the public interest be represented in the development of these technologies, and as much as I may fault the Obama administration for being too Googley in various policy areas, I also credit the president himself for apparently thinking deeply about questions like how we develop an ethical AI. At the present rate of development and investment, let’s hope the outgoing president is not the last public representative to keep this conversation in the foreground.

Sci-Fi Film Written by AI is Still Fundamentally Human

Image by Pond 5

Back in June, ArsTechnica hosted the online debut of a short film called Sunspring. Directed by Oscar Sharp and featuring the actors Elisabeth Gray, Humphrey Ker, and Thomas Middleditch, the film was made for the Sci-Fi London film festival according to guidelines for the 48-Hour Film Challenge, and it placed in the top ten out of hundreds of entries.  What is most distinctive about Sunspring, though, is its screenwriter Benjamin. No last name.  At least not one he’s told anyone yet.  You see, Benjamin is an AI.

Writer Annalee Newitz describes Sunspring as the product of Sharp’s own fascination with artificial intelligence, which led to his friendship and collaboration at NYU with researcher Ross Goodwin. Listed in the film’s credits as Writer of Writer, Goodwin is the chief architect of the AI—an LSTM recurrent neural network—that would eventually name itself Benjamin. “To train Benjamin, Goodwin fed the AI with a corpus of dozens of sci-fi screenplays he found online—mostly movies from the 1980s and 90s. Benjamin dissected them down to the letter, learning to predict which letters tended to follow each other and from there which words and phrases tended to occur together,” writes Newitz.  The whole process itself is very interesting, and I recommend reading her article to learn more.
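For readers curious what “learning which letters tend to follow each other” looks like in practice, here is a deliberately simple character-level frequency model in Python. This is not Goodwin’s code, and Benjamin’s LSTM generalizes far beyond raw counts like these, but the sketch captures the core idea: tally which character follows each short context, then sample text one character at a time.

```python
# Toy stand-in for character-level text generation -- NOT Benjamin's LSTM.
# An LSTM learns a continuous representation; this just counts frequencies.
from collections import Counter, defaultdict
import random

def train_char_model(corpus, order=3):
    """Count which character tends to follow each `order`-character context."""
    model = defaultdict(Counter)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context][corpus[i + order]] += 1
    return model

def generate(model, seed, length=80, rng=None):
    """Extend the seed one character at a time, sampling from the counts."""
    rng = rng or random.Random(0)
    order = len(seed)
    out = seed
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:  # unseen context: nothing to sample from
            break
        chars, weights = zip(*counts.items())
        out += rng.choices(chars, weights=weights)[0]
    return out
```

Trained on a few screenplays, a model like this produces locally plausible but globally incoherent text, which is a fair first intuition for why Benjamin’s script reads as a string of non-sequiturs.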

The finished film is definitely engaging, though I would not personally subscribe to the descriptions hilarious and intense as stated in Newitz’s headline. But to each his own, and headlines are headlines.  What Sunspring emphasizes for me, of course, is not a contemplation of machine intelligence but the significance of human interpretation. Benjamin’s absurdist script is a list of non-sequiturs, both in dialogue and stage direction, making the film project an experiment that almost asks the question, “Can we make a watchable movie based on the screenplay of a madman?”  The answer is of course you can.  Because cinema is very much an interpretive medium—both for makers and viewers. We can’t help but interpret; it’s what humans do.

The distinction between Sunspring and the oeuvre of human-crafted, experimental, non-narrative cinema—sometimes comprising stream-of-consciousness writing akin to Benjamin’s composition of algorithmic probability—is subtle to the point of nitpicking. Sunspring is odd, yes, but barely so if one is familiar with a film like Daisies or Hallelujah the Hills or the works of David Lynch.  The difference, of course, is that Sunspring’s absurdity—at least at the script stage—is accidental, while the absurdity of these other works is deliberate. Having said that, though, artists do make instinctive choices all the time that defy literal analysis, and audiences make poignant meaning from these expressions that were never intended or even considered by their authors.

Sunspring’s script is humorously absurdist, though presumably not in a manner of which its author could possibly be aware.  The experience of watching the finished product shares strands of comedic DNA with the same mechanism that makes the Bad Lip Reading series work—because it’s funny when a real person or a character says something absurd in an earnest manner.  When BLR has Mitt Romney on the 2012 campaign trail say to a supporter “Thank you for the bench,” the same comedy chromosomes are at work as when Sunspring’s Humphrey Ker says, “We’re going to see the money.”  Benjamin has no idea why these things are funny, but they are funny in a non-literal way that is indisputably human.

Sunspring may represent a baby step toward the expectation that an AI will inevitably write a traditional, narrative screenplay for a major motion picture.  As I wrote in a very early post, human-only, formulaic script development and machine-made or machine-assisted, formulaic script development may prove to be indistinguishable.  Instead of leading down that path, however, Sunspring reminds us that cinema is often most compelling when convention and formula are broken.  And giving an AI the responsibility of writing the blueprint for a film is certainly one way to achieve broken conventions—not unlike the artist who might experiment with narcotics to break down barriers to his or her subconscious.  Naturally, the more an AI resembles or reflects us, the more we assume its destiny is to replace us.  This is always the two-part conversation, right? There’s the gadget question that asks what an AI can accomplish, but there’s also the existential question that asks at what point we can say the AI has an identity, which is really a reflexive inquiry about our own existence.

So, here’s a hall-of-mirrors thought exercise:  might a more advanced AI than Benjamin have written a very different screenplay for the film The Imitation Game about the life and work of Alan Turing?  Personally, I like certain things about that film but was ultimately disappointed because I felt the work neglected an opportunity to explore the narrative in which the father of AI—the inventor of the Turing Test to determine the “identity” of the machine—was a man who literally had to pretend to be someone he was not.

So, if Benjamin’s great-grandson were the co-writer of a biopic about Alan Turing, might “he” bring a unique empathy for Turing’s duality given the AI’s own centaur-like existence?  And if so, wouldn’t we have to call that writing?  I think we would. On the other hand, absent the capacity for empathy or the existential question, the script is just barely structured words on a page that, as in Sunspring, only humans can interpret as having any meaning at all.