Things We Don’t Need: Generative AI

When I was planning to start The Illusion of More, I contemplated a category of posts under the heading We Don’t Need This. Although I abandoned the category, I thought it might serve as an editorial framework for articles about innovations that really aren’t innovative, and the low-tech invention that originally inspired the idea was the kiddie-car/shopping-cart hybrid. In case you haven’t had the pleasure, this vehicle enables a small child to “drive” a plastic car attached to the basket one pushes through the supermarket. As the parent of a small child (at the time IOM was launched), I found this innovation to be a terrible idea—one that demanded use the moment the child laid eyes upon it, but which mostly offered poor maneuverability through the aisles and unnecessary geometric struggle at check-out.

There is, of course, nothing connecting the kiddie-car/shopping-cart to generative AI except, in my view, the fact that we don’t need either one. Or at least, we don’t need most of what generative AI appears to be doing, and this is perhaps the most maddening aspect of the most prominent generative AI tools making the headlines—that they serve no purpose and, if we’re getting all IP about it, promote no progress. I’ve said it, and I’ll keep saying it:  we do not need computers to make artistic works.

This month, the Federal Trade Commission (FTC) issued a report describing its early findings about AI’s potential harms that may be addressable under the agency’s purview. Charged with enforcing prohibitions against unfair and anticompetitive business practices and with protecting consumers, the FTC hosted a roundtable discussion with members of the creative community to hear their concerns about both the development and the public deployment of generative AI. As the report states:

Various competition and consumer protection concerns may arise when AI is deployed in the creative professions. Conduct—such as training an AI tool on protected expression without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness—may constitute an unfair method of competition or an unfair or deceptive practice.

In response to the report—specifically to the passage quoted above—three well-known copyright critics, Pamela Samuelson, Matthew Sag, and Christopher Sprigman (SS&S), criticized the FTC “both for its opacity and for the ways in which it may be interpreted (or misinterpreted) to chill innovation and restrict competition in the markets for AI technologies.” Before responding to that allegation, I must indulge in a little gallows humor and mention that the economic and global-security leader of the free world is in danger of shredding its Constitution, going full-tilt authoritarian, and spiraling into a death roll of ignorance and cruelty. And yet, we’re going to talk about “chilling innovation” in generative AI as if it’s a matter of urgency. The world is in crisis, and billions have been invested to see who can do the best job getting a computer to write a poem or make a picture? Talk about whimpers instead of bangs.

There are two reasons that sentiment is not raw Luddism. The first is that it does not dismiss all AI development in the creative industries as useless; the second is that the “copyright stifles innovation” bullet point is a generalization that should never be uttered again—especially in light of its direct role in fostering the above-mentioned prospect of democracy’s collapse. We’ve heard all this before—specifically from SS&S and their colleagues in academia and the “digital rights” organizations. We’ve been told that copyright stifles the free and open internet, access to information, and the speech right.

But in addition to the fact that the premise itself was false, the grand social media experiment in the “democratization of everything” must be recognized as an abysmal failure, and its cheerleaders should muster the humility to stifle their tiresome and dangerous refrains in the context of AI. Social media companies and their friends in academia—and here, I must include President Obama’s Google-friendly administration—share considerable blame for the heedless, tech-enabled populism that has fostered so many social hazards, including a literal seditionist now leading one of America’s two political parties.

Notably, the FTC report does not mention copyright very much, and in fact, many of the creative professionals who participated in the discussions acknowledged that because they are not copyright owners (e.g., voice actors and screenwriters for hire were among the representatives), they have no rights currently protecting them against the kinds of unfair outcomes from generative AI that the FTC is charged with mitigating. It would take too long a post to respond to all the critiques presented by SS&S, but I want to focus on this statement:

We are concerned especially about the suggestion in the FTC’s Comments that AI training might be a Section 5 violation where it “diminishes the value of [a creator’s] existing or future works.” A hallmark of competition is that it diminishes the returns that producers are likely to garner relative to a less competitive marketplace. This is just as likely to be true in markets for creative goods, such as novels and paintings, as it is in markets for ordinary tangible goods like automobiles and groceries. AI agents that produce outputs that are not substantially similar to any work on which the AI agent was trained, and are thus not infringing on any particular copyright owner’s rights, are lawful competition for the works on which they are trained.  Surely the FTC does not plan to have Section 5 displace the judgments of copyright law on what is and what is not lawful competition?

To summarize, that paragraph declares that it does not matter if generative AI displaces human authors; in fact, it frames displacement as a threshold we should be eager to cross. Notwithstanding the fact that two of the high-profile lawsuits present compelling evidence of substantially similar outputs,[1] the more concerning implication of that paragraph is that SS&S endorse the inevitability that generative AI will devalue human creators and/or eliminate them altogether. Moreover, calling this eventuality a form of “competition” reveals an unsettling perspective consistent with every anti-copyright paper I have ever read—namely, that the production of creative works is no different than the production of any other product or service.

I’ve said many times that copyright critics don’t understand artists, and here, the inapt word competition demonstrates why this axiom endures. Publishers, for instance, are in competition with one another to an extent, but authors are not—at least not in the sense that the concept applies in other industries, least of all as it applies to Big Tech. No novelist wants to hold the undivided and exclusive attention of all readers the way Meta wants eyeballs never to stray for long from its platforms. Artists thrive in a diverse market of other artists, consumers benefit as a result, and copyright is an engine of that diversity, not a barrier to it. Artists may feel competitive or jealous at times, or even behave in a competitive manner (because they’re human), but the reality is that they need one another, to a degree that is not comparable to other “businesses.” True to form, copyright critics like to cite the interdependence of authors to highlight copyright’s limitations but then ignore the same principle in support of tech giants swallowing all creative enterprise whole.

The primary concern expressed by SS&S appears to be that the FTC alleges that AI training with copyrighted works is an act of infringement. Unsurprisingly, this same trio submitted comments to the Copyright Office arguing that AI training with protected works is fair use, but as that very question is already presented in several court cases, I assume SS&S are primarily concerned with optics here. The trio states, “The FTC has no authority to determine what is and what is not copyright infringement, or what is or is not fair use. Under governing law, that is a judicial function.”

Exactly. And the question is now before the courts. So, what’s the problem? That the FTC should not even raise the issue? In tweets, Samuelson and Sprigman argue that the FTC’s report is one-sided, that it is too creator-focused and does not account for the testimony or opinions of the technology companies developing AI. But while I certainly agree that multistakeholder hearings are the proper approach to developing new policy, it is impossible to tolerate a complaint about lack of balance coming from the anti-copyright crowd at all, and from these individuals in particular. For instance, readers may not remember the American Law Institute Restatement of Copyright, initiated by Samuelson and led by Sprigman, but critics of the project—some of the most prominent names in copyright scholarship—specifically cite the opacity of the restatement process and its managers’ deafness to the concerns and recommendations of their colleagues.

More broadly, it must be said that if, indeed, the FTC lately gave more attention to the creators than it did to the tech companies, then this was a long-overdue anomaly. Between at least the mid-to-late 1990s and 2016, the tech companies were treated with kid gloves, handed the keys to Washington, and feted like the economic and democratic engines they claimed to be. Since 2016, sentiment has begun to swing in the other direction, as many Americans have begun to see how disinformation plus data manipulation can become a wrecking ball for a whole society.

If Big Tech lost the previously undeserved benefit of the doubt, good. AI has the potential to exacerbate many of the same Web 2.0 harms at unprecedented speed and scale, and if the FTC, the USCO, the courts, or Congress look askance at the developers, then it is a mistrust well earned. And again, at least with regard to generative AI designed to make creative works, none of the parties empowered to write policy in this area should forget the bottom line:  that when it comes to producing creative work, we truly do not need generative AI.


[1] Concord et al. v. Anthropic and NYT v. OpenAI, et al.

SEE ALSO: The Washington Post reported this month that Big Tech continues to significantly fund and influence academia in these policy areas.

Photo by: Jollier

Spotify Still Wrangling with Songwriter Royalties

On January 8 of this year, The Trichordist ran a story, apparently rejected by the Huffington Post, in which indie musician Blake Morgan describes a closed-door meeting between Spotify executives and a group of musicians.  According to Morgan, he actually had to explain that Spotify’s “product” is not Spotify itself but music—music that Morgan and his friends make and that Spotify monetizes.  And that’s fine, even welcome, if the company pays for licenses.

But Spotify has a big—potentially very big—problem when it comes to paying for mechanical licenses, which compensate songwriters and composers for their compositions, regardless of which artist(s) perform the work.  These licenses are required for reproduction under §106(1) or distribution under §106(3) of the Copyright Act; and based on precedent, a streaming service like Spotify is held to both reproduce and distribute musical compositions.

Unfortunately, the company has allegedly failed to pay for mechanicals for thousands of compositions, which is why it currently faces litigation from several complainants with potential damages running into billions of dollars.  Biggest among these is the Wixen Publishing suit, filed on the eve of the Music Modernization Act (now law) first being introduced in committee.  The suit implicates around $1.6 billion in damages for failure to license works by songwriters including Tom Petty, Stevie Nicks, Neil Young, et al.

With such prominent names in the mix, one might think that Spotify’s original defense (i.e., that rights holders are hard to find) would not hold up very well.  And it did not, as exemplified by the comparatively modest Lowery/Ferrick class-action suit, which settled in May 2017 with a $43 million fund for various songwriters.

Then, with the Music Modernization Act pending (legislation that would bring an end to new litigation over failure to obtain mechanicals), late 2017 saw a spate of new complaints against Spotify for its apparently sweeping failure to secure these licenses.  And perhaps it was the extinction-scale degree of the potential damages that then inspired fresh creativity in Spotify’s defenses.

In a September 2017 post, I described the suits filed by Bluewater Music Services and songwriter/musician/producer Robert Gaudio.  In its initial response to these complaints, Spotify implied that, as a streaming platform, it was never obligated to pay for mechanical licenses.  This drew immediate reaction from the National Music Publishers Association and CEO David Israelite’s declaration that the platform was then “in a fight with all songwriters.”

Spotify’s rationale in that brief was that streaming only implicates the right of public performance and not distribution; but as I noted in that post last September, even if a court agreed with this interpretation (and that is a big IF), this would still leave the reproduction right, for which a mechanical license is still required.  This no-license-needed defense remains among Spotify’s arguments in its current filings, but according to a recent article by Eriq Gardner in The Hollywood Reporter, the streaming company has introduced a new theory to the Bluewater case.

Because Bluewater administers copyrights for publisher clients but is not the owner of those copyrights, Spotify questions whether the company has standing to sue for infringement of the mechanical right for all the titles named in its complaint.  Spotify’s theory turns on two premises: a) Bluewater is not empowered to license for less than statutory rates without written consent of its publisher clients; and b) any party can obtain a mechanical license at the statutory rate by filing a Notice of Intention (NOI) with the Copyright Office.  Therefore, Spotify reasons, Bluewater’s authority to grant the license is non-exclusive, and if that’s the case, Bluewater does not have standing to sue for these alleged infringements.

Spotify’s argument hinges substantially on the fact that mechanical licenses are compulsory.  No songwriter/composer can deny any party a mechanical license to use a musical work as written.  On the other hand, these owners can authorize parties like Bluewater to administer those rights on their behalf, so if this reads like very fine parsing on Spotify’s part, it will be interesting to see whether the court thinks so, too.  In either case, any mechanical licensing after January 2018 is subject to the terms of the MMA, so it seems doubtful that the Sixth Circuit’s opinion will have substantial effect going forward, regardless of how the court rules.

It was Devlin Hartline at the Center for the Protection of Intellectual Property (CPIP) who shared this story on Twitter, so I asked his view, and he replied …

“It’s quite noteworthy that Spotify summons no support in the case law for its newfound position that there can be no exclusive licensee of the mechanical rights in a musical work at the statutory rate since there’s no exclusivity given the compulsory license. The compulsory mechanical license has existed since the Copyright Act of 1909. If the argument had any merit, you’d think Spotify would be able to find at least some precedent in support. Instead, this move comes across as another desperate attempt by Spotify to avoid paying for the works that it failed to license properly in the first place.”

Further, Hartline opined in his tweet that Spotify counsel Christopher Sprigman’s presentation of this unique defense might be another reason to be concerned about his leading the Restatement of Copyright Law initiative at the American Law Institute.  As described in a January post, some prominent copyright skeptics have pushed for this Restatement project, which is unprecedented in the annals of all statutory law—not just copyright.  As I wrote in that post …

ALI Restatements have never been written for comprehensive federal laws like copyright because these are already statutory, or black-letter, laws.  Congress writes the statutes, the judiciary interprets them, and attorneys make their arguments; but everybody’s working from the same statutes and a much narrower body of case law than common law entails.  Hence, this request for a Restatement of copyright law represents an end-run around Congress—an effort to reshape the Copyright Act without a legislative process.

Sprigman is counsel for Spotify; he’s the lead Reporter on the ALI Restatement project; and he’s the co-author of a paper called The Second Digital Disruption (see two-part response here), which rather speciously asserts that because market data reduces risk, authors no longer need strong copyright protections.  Not that I generally like picking on any one individual, but it just so happens that Sprigman’s name seems to feature in a trifecta of the anti-copyright agenda—litigation, policy, and academia—and largely in the service of billion-dollar tech companies like Spotify that don’t even know they’re in the music business.

Libido for Dystopia:  A Response to “The Second Digital Disruption” – Part II

In Part I of this response to Raustiala and Sprigman’s paper, I contend that the authors place too much emphasis on the porn industry (namely on one data company’s transformative effect) as a model that can be instructive for other types of creators.  Primarily, I believe the authors fail to weigh the substantial differences between porn and nearly all other forms of copyrightable expression.  In Part II, I respond to the paper’s main thesis that access to consumer data can significantly minimize the risk involved in producing a creative work and that this lowered barrier suggests a recalibration of copyright law in both theory and practice. 

Although the paper’s authors do parenthetically admit that it is too soon to predict the extent to which “data-driven authorship” may reduce the risk of market failure, they still proceed to make a strenuous case for limiting copyright protections on this basis.  While it seems very likely that data-mining us consumers will continue to influence the development of at least some creative works, it is indeed far too soon to assume that market success will be any more likely to result from algorithmic intuition than from good old-fashioned human instinct.

For one thing, even pre-digital over-analysis of market expectations has a long history of yielding works that audiences find dull and, well, predictable.  Meanwhile, more than a few works of distinction are the products of gut feel, singular visions, and even drug-induced madness.  Most of the time, of course, the works that earn and retain their place in the stars are products of pure happenstance and sweat—an unpredictable cosmic meeting of elements in both time and place, plus a lot of work.

But even if data-mining could produce the elusive magic box of prescience that increases the likelihood of market success by a significant margin, does this really implicate a change in copyright law as Raustiala and Sprigman propose?  They write… 

“Copyright is traditionally justified as necessary to protect investments in the production of creative works. If others are simply free to copy original works, then originators will find it impossible to recover their investments. If you want creativity, the story goes, you have to stop copying.”

That is a partial truth at best.  Investment of time or money, or both, is certainly one reason traditionally presented as the case for copyright protection, but it is not necessarily the core of copyright’s purpose.  As referenced in this recent post, it has long been held in English courts, American courts, and the innate sentiments of many people that the fruits of the author’s labor are her property as a principle of natural right.  So, it is hardly axiomatic that copyright’s raison d’être is based solely on the protection of some externally measurable amount of investment risk.

To the contrary, copyright’s protections apply uniformly to works produced across a very wide spectrum of investment risk.  A song written in a half hour is protected in exactly the same manner as a song mulled over for years, just as the bootstrapped indie film financed with personal debt is protected the same as the hundred-million-dollar studio feature.  The prodigy has the same rights as the late bloomer, and if the market rewards either creator, this will only reflect an appreciation of the work itself and not the amount of labor or risk invested to produce it.

Relatedly, how might we measure the relative risk taken by various authors in order to rationally limit copyright based on the proposed Raustiala/Sprigman theory?  Is the novelist who spends three years writing on spec taking a greater or lesser risk than the investors who pour millions into a new action movie?  Even without detailed analysis, we can say with near certainty that the investors in the blockbuster film stand a much better chance of earning a substantial return than the book author ever will for her invested risk.

Yet, according to Raustiala/Sprigman, we might need to consider limiting the copyright protections for the speculative book authors on the grounds that the big-time movie investors may soon be able to use consumer data to reduce their risk by some as-yet-undefined margin.  And if copyright were limited on this basis, how could such a policy achieve anything other than to exacerbate the disparity between corporate and individual creators?  If, as the paper points out, TimeWarner needs AT&T’s data-mining capacity to compete with Netflix and Amazon, then it stands to reason that the independent author in a market dominated by “data-driven authorship” becomes an even smaller cog in the machine. 

And this appears to be Raustiala and Sprigman’s real goal:  to reduce the status of the author as an individual who does anything particularly special.  After all, if we redefine the significance of the author, this would certainly be grounds for redefining the purpose and practice of copyright law.  Despite the fact that there is no telling at this moment how significant “data-driven authorship” may be, the authors of this paper are ebullient over the prospect that it will likely “undermine the Promethean allegory” in favor of what they call the “Panoptian model.”

“In the Panoptian model, creators are no longer Promethean geniuses who bring something previously unknown from the heavens down to earth.  Instead, they are unsleeping watchers.  They are accessories to a system of surveillance — one that we, as consumers, have for the most part bought into willingly, but which we are nonetheless likely to understand [sic] not entirely new and less than entirely beneficent.”  

I’ve read some artist-hating, tech-utopian declarations over the years, but this paper’s view of creators as “accessories to a system of surveillance” may be the winner.  Its implications reach far beyond Raustiala and Sprigman’s feelings about authors and copyright and—perhaps unwittingly—advocate the supremacy of the networked hive, in which each sovereign individual’s value is reduced to a data point.

In our contemporary narrative, when society is struggling to hold onto democratic principles (i.e. our humanity) against the forces of tech-enabled extremism, these academics, so eager to find some rationale for limiting copyright law, have managed to advocate what Professor Shoshana Zuboff has termed “surveillance capitalism.”  “This new form of information capitalism aims to predict and modify human behavior as a means to produce revenue and market control,” she writes.  Or to put it another way, Raustiala and Sprigman seem to hope that data can do for art what Cambridge Analytica has done for elections.   

What many readers may not know is that Christopher Sprigman is one of the most influential thought leaders on contemporary copyright, which I find disconcerting to the extent this paper reflects “new thinking.”  While I see no reason to scorn conversations about amending the practice of copyright law in response to new and clearly definable market realities, I find the underlying view of creators espoused by this paper frankly cynical and ugly.  As the authors write, “The [Panoptian] label refers to Argus Panoptes, the hundred-eyed giant of Greek mythology who served as unsleeping watchman for Hera.”  Indeed.

At best, Argus is an apt metaphor for a surveillance state that serves the interests of a single, powerful, and jealous goddess.  Of course, the force that brings down that surveillance state, which causes Argus to sleep with all eyes closed and thus lose his life to the sword of Mercury, is music.  So, with respect to their chosen metaphor, it is notable that the future Raustiala and Sprigman want to embrace is one that does not celebrate the next Brian Wilson who tells family, friends, and corporate powers to “screw the formula” while he produces a landmark album that makes even the Beatles drop everything and say, “Damn.  That’s different.” 


Photo by kentoh