Can an AI Own a Copyright?

(Image source: kentoh)

Remember Clippy?  He was the animated paper-clip assistant who lived, several years ago, amid the code of Microsoft Office. He would pop up rather suddenly on your desktop and interrupt your work to offer unsolicited advice as to how that work might be improved.  He was so annoying that Bill Gates reportedly sent an email memo to his staff titled "Clippy Must Die."  And although Clippy does indeed lie in a virtual, unmarked grave somewhere in Redmond, perhaps we are not too far from seeing consumer and business software with integrated AIs subtle and helpful enough to be appreciated by the market. If so, what, if anything, does this mean for copyright?

Early this month, Kalev Leetaru, writing for Forbes, published an article asking the hypothetical question of whether an AI algorithm may someday own a copyright.  The blunt answer to this question should be no.  Intellectual property law is entirely based on understanding and valuing human intellect and creative capacity. No bots need apply.

Of course, the larger question Leetaru asks, and to which he alludes in his article, is whether we might soon have to endure theories about civil rights in general for AI algorithms.  It is a dismaying irony, to say the least, that Moore's Law implies an AI may attain a consciousness we call "existence" long before we come anywhere close to achieving civil rights for all humans. But then, isn't this exact narrative often the theme of science fiction, in which the AIs take over because we don't know how to behave?  More to the point, science fiction has frequently answered the thesis question at hand by predicting that if the AIs actually wake up and become self-aware, the matter of their rights will no longer be our choice.  In that scenario, the AIs become the dominant species, and our so-called rights will be subject to their mercy, or to their sense of our usefulness.

But between now and the technological singularity, which may never occur, the question of intellectual property for AIs probably will become the focus of some litigation in the relatively near future. If we think of an AI not as an autonomous being but as a human-owned and programmed machine that may produce a creative work its human owners did not truly imagine would be made, then we can expect the company that owns the AI to register the resulting works, just as it would register works made for hire by an employee.  But should it be allowed to do so?

Even if the AI-produced work met the conditions necessary for a company to register a work made for hire (and this seems unlikely because an AI is not an employee), the broader issue of the work's copyrightability dates back to the early days of photography, which was the first time the courts had to decide how much human influence is involved in producing a machine-made work. That case law begins in the U.S. in 1884, with the Supreme Court's decision in Burrow-Giles Lithographic Co. v. Sarony holding that a photograph could be copyrighted as the product of the photographer's creative choices; so although the progress of AIs may be a highly contemporary subject, the copyright question Leetaru raises is not nearly so novel as it may appear.

As a rule, some modicum of human creativity—and therefore, some purposeful imagination of what the resulting work will be—has to be present for copyright to exist. The more clearly the human choices are observable in a work, the stronger the copyright claim will be.  So, if a human invents a "creative" AI but has no clear expectation of what the AI is going to produce, this is roughly parallel to installing a security camera that then autonomously captures random images, which are not properly copyrightable.

Still, I think we can safely expect that a corporate owner of an AI that produces a creative work will want to register a copyright in that work. If so, the issue of protection should turn on the extent to which the human(s) exercised any creative influence in producing the work. But this factor may be quite difficult to prove or disprove, and it may not be honestly represented by the AI's owner.  And who knows whether we can count on the testimony of the AI itself.

Perhaps the more likely, near-term scenario is one in which a work is a collaboration between an author and a consumer-product AI owned by a large company like Adobe.  The new iterations of "assistants" won't be pesky animations telling you that you've misspelled addendum; they will be seamlessly integrated partners that can subtly contribute revisions reflecting an intuitive "understanding" of your intent.  At the very least, it's easy to imagine business communicators relying on such advanced AIs to transform gobbledygook emails and texts into coherent missives.

We’re already seeing products that use adaptive AI for photography; every few months, it seems, there’s another announcement that some new and terrible musical work has been produced by an AI; and people have been experimenting with AI and screenplay production for years. Even if the AIs don’t take over, they are likely to become more involved, and the more a creator allows a machine to make choices, the more her claim of copyright may be weakened in actual litigation.

So, what if a creative human truly collaborates with an AI to the extent that the AI makes a substantial and measurable contribution to the finished work?  Let’s face it: if a robot can feel enough existential angst to commit suicide by drowning itself in a fountain, a robot artist will soon be among us.  Then, what happens if, for instance, a composer collaborates with an AI through a portal that is networked and monitored by the AI’s corporate owner?  Is this a road that leads to the corporate entity claiming joint ownership of the work?

Under current copyright law related to “jointly made” works, it would be quite difficult for the AI owner to demonstrate a) that the AI is “human enough” to claim an IP right at all; or b) that the human inventors/owners of the AI contributed, through their invention, to the finished work.  Moreover, there must be an initial intent to create a jointly made work in order for all collaborators to claim ownership.  But if the makers of AIs sought to claim some ownership in the works produced, they could lobby to change how the law defines “jointly made” works, at which point it will be interesting to see whether the EFF fights for AI rights.

One way or another, copyright expert Sandra Aistars, Clinical Professor of Law at George Mason University, suggests that as AIs advance in this way, “User agreements would become even more important because that is where companies creating AIs would deal with the requirement that there be an intent to create a joint work.  Authors using new, adaptive tools would need to be more vigilant about paying attention to terms of service and end-user agreements.”

It’s tough to predict where this is leading.  What I do anticipate is that if the AIs themselves start asserting copyright ownership of their works and their AI attorneys engage in cyber-litigation over AI-to-AI infringement claims, the whole network will probably crash, and the last creator standing will be Clippy.

© 2017, David Newhoff. All rights reserved.
