Podcast – Artists’ Rights with Musician Blake Morgan

Blake Morgan

If it can be difficult to keep up with artists’ rights in the news, that goes double for music. Fortunately, there are some incredible artists who devote as much energy and passion to rights advocacy as they do to making music—and among those individuals is Blake Morgan. Singer/songwriter, recording artist, indie label owner, and producer, Blake epitomizes the hard-working, middle-class artist—grateful to make music for a living, but still a guy with a mortgage and bills to pay. In this podcast interview, Blake and I discuss the major threats he sees to artists’ rights and why he keeps fighting the good fight. And to say the least, his optimism is infectious. Hope you enjoy!

Photo by: Taylor Ballantyne

Episode Contents

  • 02:37 – Why I fight for artists’ rights.
  • 07:22 – The biggest threats facing artists.
  • 11:52 – The American Music Fairness Act.
  • 16:27 – Dying of “exposure.”
  • 18:40 – A middle-class face on the cause.
  • 24:00 – Spotify’s “big payouts” to artists.
  • 30:00 – Support for the TikTok legislation.
  • 36:10 – Private equity investment in music catalogs.
  • 45:00 – The Van Gogh diversion.
  • 46:10 – Advice to the next generation.
  • 50:11 – The latest album Violent Delights.

Articles/Posts Cited:

Spotify “Loud & Clear” Payout Report

“Same Old Song: Private Equity is Destroying Our Music Ecosystem” by Marc Hogan

Trichordist Guest Post: “A Musician’s View of the TikTok Legislation” by Blake Morgan

Copyright News to Watch

Publishers File Brief in Response to Internet Archive Appeal

On Friday, the publishers in Hachette, et al. v. Internet Archive filed their response brief opposing the archive’s appeal of its loss in district court. IA maintains that its practice of “Controlled Digital Lending” is not copyright infringement under the doctrine of fair use, despite the lower court’s swift and resounding decision rejecting that defense in late March 2023. As the lower court stated:

At bottom, IA’s fair use defense rests on the notion that lawfully acquiring a copyrighted print book entitles the recipient to make an unauthorized copy and distribute it in place of the print book, so long as it does not simultaneously lend the print book. But no case or legal principle supports that notion. Every authority points the other direction.

Given the amount of Second Circuit precedent contributing to the district court’s four-day turnaround decision, it is hard to see how IA will fare any better on appeal. But we shall see.

Santos v. Kimmel May Be Entertaining

In a complaint filed in mid-February, American fabulist George Santos accuses comedian Jimmy Kimmel et al. of copyright infringement, fraudulent inducement, breach of contract, and unjust enrichment, all arising from Kimmel’s pranking Santos’s new gig making personalized video messages on Cameo.com. Kimmel allegedly tricked Santos by creating fake customer identities and then, posing as those “customers,” requesting absurd material for Santos to record. Kimmel then played five of the Cameo videos on his TV show to mock Santos.

I read the Santos complaint over the weekend, and if the facts presented are accurate, the case could provide some interesting details for copyright nerds. But given that we’re talking about George Santos, the prudent course at the moment is to at least wait for the defendants’ response before commenting.

Goldsmith and the Andy Warhol Foundation Settle

In a court filing on Friday, photographer Lynn Goldsmith and the Andy Warhol Foundation (AWF) agreed to settlement terms, concluding the landmark fair use case. AWF will pay Goldsmith $10,250, based on the original licensing fee for use of her photo in the Warhol screen print prepared for the Condé Nast issue, and it will pay another $11,272.94 in taxable costs. Each party bears its own remaining expenses and attorney fees.

“AWF’s position is that the original creation of the Prince Series was fair use, and that nothing in the Supreme Court’s opinion undermines that view,” the court filing states. Indeed, the legality of the entire Prince Series prepared by Warhol was not addressed in this case. But the clarification on “transformative” use delivered by the Court—and which led directly to judgments against appropriation artist Richard Prince—suggests that AWF should probably not evangelize that opinion in the art world.

More Suing of Generative AI

On March 8th, authors Abdi Nazemian, Brian Keene, and Stewart O’Nan filed a class-action lawsuit against NVIDIA Corporation, alleging copyright infringement of books for the purpose of training the NeMo Megatron–GPT, a Large Language Model (LLM). The claim rests entirely on the allegation of unauthorized reproduction in the training process, and as with the Chabon lawsuit against OpenAI, plaintiffs here accuse the defendant of accessing large volumes of books from dubious sources…

Bibliotik is one of a number of notorious “shadow library” websites that also includes Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Anna’s Archive. These shadow libraries have long been of interest to the AI-training community because they host and distribute vast quantities of unlicensed copyrighted material. For that reason, these shadow libraries also violate the U.S. Copyright Act.

Given the general consistency in both the legal and factual allegations in most of the cases against generative AI developers, the earliest outcomes could signal a blowout for either creators or the developers. As stated in earlier posts, if the reproduction right is held to be violated in the process of machine learning, it is hard to see how any of the developers overcome that claim. The case to watch is arguably New York Times v. OpenAI, because the Times has presented so much compelling evidence that works output by the system are substantially similar to works input by the system. In a close second is probably Concord v. Anthropic, where music publishers have likewise presented evidence of substantially similar lyrics output by the system.

In a hearing with Big Tech, senators make headlines, but can they make headway?

On Wednesday, January 31, the Senate Judiciary Committee presided over a dramatic hearing titled Big Tech and the Online Child Sexual Exploitation Crisis. The gallery was filled with family members representing young victims of sexual exploitation, drug-related deaths, and adverse mental health effects of social media that can lead to chronic illness and suicide. The witnesses who provided testimony and faced often tense grilling by senators included Mark Zuckerberg, CEO of Meta; Linda Yaccarino, CEO of X Corp; Shou Chew, CEO of TikTok; Evan Spiegel, CEO of Snap Inc.; and Jason Citron, CEO of Discord Inc.

By now, many highlights have been published in the press and on social media, including Senator Graham’s opening salvo telling the witnesses they “have blood on their hands.” There was also Sen. Hawley’s rhetorical grilling of Zuckerberg, asking whether he had personally created a fund out of his billions to compensate any families. And then, there was Sen. Whitehouse, who stated quite simply, “We’re here because your platforms really suck at policing themselves,” thereby summarizing a bipartisan sentiment that has produced five bills passed by this committee alone.

Dramatic moments aside, though, what, if anything, will get done this year? As committee members themselves noted throughout the hearing, this is a road much travelled, and little has been accomplished, either through legislation or through voluntary measures by the platforms, to address the kind of harms at issue. Big Tech’s “tobacco moment” was supposed to be in 2021, when key witnesses and whistleblowers testified that, yes, social media platforms can cause harm to users, are designed to be addictive, and that industry executives put revenue ahead of safety.

Notwithstanding Senator Cruz and other Republicans blasting Mr. Chew over the valid but separate matter of TikTok’s alleged obligations to censor and/or provide information to the Chinese Communist Party, nearly every senator reiterated a theme of rare unanimity on the central issues before the committee. There is, of course, no political downside for either party when the issues involve children, sexual exploitation, suicide, and fentanyl, and the target is Big Tech. There should be no doubt that the intent to legislate is real, but several senators alluded to the platforms’ lack of cooperation and their lobbying power to avoid federal intervention.

For instance, among the bills cited and not wholly supported by online platforms, the SHIELD Act would criminalize the nonconsensual distribution of intimate visual depictions of persons—a subject that has been on the Hill since Rep. Speier first introduced a bill in 2015. Now, with advancements in AI tools that can be used to generate synthetic sexual material using the likeness of a real person (e.g., what happened to Taylor Swift), the issue is more complicated. And by my count, there are at least two House bills responding to AI as a method to achieve potentially more harmful results than the distribution of existing recorded material.[1]

Presumably, Congress will need to harmonize legislative efforts where there appears to be some redundancy in the intent to mitigate harm based on the nature of certain material and/or the means of production and distribution of that material. Moreover, the various issues raised in the hearing imply distinct forms of accountability (e.g., the design of a platform potentially harming mental health; the handling of material uploaded by users; or platforms being more transparent about negative effects).

In a future post, I will try to summarize all the proposed legislation designed to address specific harms caused or exacerbated by social media platforms. But one subject raised on Wednesday, and which must come first, is revision of Section 230 of the Communications Decency Act. As discussed here many times, Section 230 has been improperly read by the courts as a blanket immunity from civil litigation for online service providers, regardless of how irresponsibly the operators may address harmful material uploaded by a user of the platform.

Section 230 Front and Center

Sen. Graham declared that it’s time to repeal Section 230, while other senators were more moderate, alluding to revision of the law. Regardless, there should be little doubt that Congress supports the premise that online platforms must be subject to litigation to incentivize more effective cooperation in addressing various harms. Most immediately, revision of 230 must make clear that platforms are not exempt from court orders to remove material that is harmful to the aggrieved party.

One of the most infuriating aspects of misapplication of 230 to date is not simply that the platform is never liable for the harm (because it may not be), but that a platform can avoid complying with injunctive relief—often little more than having the basic decency to remove material that is shown to be harmful. As Sen. Whitehouse made clear, the court is the venue for determining liability and remedies, and several of his colleagues noted that it is simply absurd that one multi-billion-dollar industry is automatically excused from those procedures.

Thus, as a foundational matter, it seems essential that Section 230 is substantially revised to ensure that people, like the families represented at the hearing, can pursue legal action without having the court automatically dismiss the claim. Of course, sound reform of 230 must reject the rhetoric of some lawmakers, including Sen. Cruz, who have muddied the waters with unfounded and unhelpful allegations of platform political bias. If nothing else, alleged viewpoint bias is not a subject of Section 230, and if lawmakers really want to help the kids, they must remain focused on ensuring that a family can have its day in court.

So, as stated, we’ve been here before. Wednesday’s hearing provided a pretty good highlights reel, but let’s see if, this year, it can finally lead to tangible solutions.


[1] Preventing Deepfakes of Intimate Images Act, and the No AI FRAUD Act.