Udio Answers Record Labels’ Complaint in Gen AI Lawsuit

As mentioned in my last post about the record labels’ lawsuits against GAI companies Suno and Udio, I will generally focus on the latter case. The two cases are almost identical, but because UMG et al. v. Uncharted Labs Inc. is in the SDNY (in the Second Circuit), those proceedings may be followed by courts in circuits with considerably less copyright law precedent.

Udio’s answer filed on August 1 relies substantially on the premise that there is no cause of action whatsoever. The developer intends to show that “This lawsuit…seeks a genuinely unprecedented result: a ruling that it is actionable copyright infringement, not fair use, to have copied Plaintiffs’ works as part of the process of developing a new technology, even though the ultimate outputs of that new technology are themselves non-infringing.”

The list of countervailing evidence Udio forecasts is too long to summarize, but the heart of its legal argument at this juncture is that 1) the statute explicitly bars protection of musical style; 2) its AI training process entailed learning about music rather than copying protected sound recordings; and 3) because the product’s outputs are largely (or entirely) non-infringing, the purpose of producing “new” music is “what copyright law is designed to encourage, not prohibit.” The defendant also alleges that the recording industry’s claim is invalid on the basis that it has “misused copyright law” as part of a longstanding tradition of stifling competition. So, we have a ballgame that’s going into extra innings, and there will be plenty to say about the details as they emerge.

For now, I take issue with a few premises inherent to Udio’s answer. The first, which I have already stated a few times, is that even if every song output by Udio is “new” as a matter of law, none of those outputs is a work of “new authorship” as a matter of law, and this undercuts Udio’s implication that its product expands the purpose of copyright in general. And as stated, if that is correct, it should militate against a finding of fair use.

Second, the fact that Udio can and does point to uses of its product by creators who are plausibly engaged in authorship is, at best, a difficult basis on which to argue that the primary purpose of the product advances authorship. For one thing, the business model appears to be based largely on providing a music toy for consumers, not a tool for creators. And even where Udio may be used by professional music creators, the extent to which this fosters new authorship is a case-by-case consideration, one that relies on still-developing doctrine around AI and authorship.

Third, even if Udio could prove its allegations of relevant, anti-competitive practices among the record labels (and I do not mean to suggest it can), the court must remain focused on the interests of individual creators, especially the next generation of music makers. The labels’ argument that the outputs compete with demand for existing sound recordings could be read as protectionism of existing catalogs, but the more important question is whether Udio competes with, or even obviates, the need for new human authorship in music. If so, that is categorically not what copyright law is designed to foster.

As stated in a few posts, and in comments to the Copyright Office, the unique challenge presented by GAI is that rather than pose a threat to specific authors’ works, it poses a potential threat to authorship itself. In this light, Professor Jane Ginsburg, in a new paper about the state of fair use jurisprudence, discusses two points that stand out for me at the moment. First, she describes the nature of a use-based fair use analysis (as applied in Warhol), which holds that a finding of fair use should not be “untethered” from the particular use narrowly ruled on by the court. Second, she notes that the courts may look beyond the “explicit direction” of the fair use statute to consider a factor like the broad effect on authors’ careers, or even the potential for other unlawful uses like forgery or fraud.

With regard to use-based analysis, Ginsburg forecasts the uncertainty of adopting a per se fair use rule for machine learning because the fair use consideration of the inputs may turn on the nature of the outputs. “If an AI system ingests multiple images of apples, including Cézanne’s depictions (let’s assume Cézanne’s works were still under copyright), its training data will enable the system to ‘know’ both what an apple looks like, and what a Cézanne apple looks like. The fair use inquiry may depend on whether the user asks for an apple, or for a Cézanne apple,” Ginsburg writes.

Perhaps more directly applicable to the labels’ case against Udio, Ginsburg states in regard to image-generating AIs and fair use factor four, the effect of the use on market value:

… even under a solely work-based interpretation of section 107(4), one may observe that the wholesale copying of an artist’s works into training data in order to enable stylistically similar outputs jeopardizes not only the artist’s future employment or commissions, but also devalues the actual works copied, because the image-generation program can produce outputs that compete with already-created works as well.

That same rationale would seem to apply to the labels’ evidence that Udio can output sounds which are substantially similar to famous and protected sound recordings. So, while the defendant is correct to say that copyright does not protect style and that music production relies substantially on mixing and matching a finite combination of styles, arrangements, etc., that premise, both statutory and judicial, is derived from a copyright history that has only ever included human artists in “competition” with one another. Consequently, the courts have latitude to find that it is in fact the AI developer who is seeking the novel conclusion that its machine furthers the purpose of copyright law.

As I say, there will be plenty of details to follow and plenty of considerations to nerd out on, if one is so inclined. And for better or worse, I am so inclined. Stay tuned (pun intended).

No FAKES Act Introduced: A Big Deal for Performing Artists and Everyone Else

Ever since the generative artificial intelligence (GAI) controversy began heating up, I’ve had several conversations with friends and colleagues who are voice actors and have had to disappoint them by repeating the fact that copyright law does not protect a person’s “likeness,” which includes one’s voice. And I’ve had similar conversations with colleagues focused on replication of likeness for the production of nonconsensual pornography. Nevertheless, the instinct makes sense—that the same human-centric principles that protect “authorship” might apply to the human’s likeness as well. Now, that basic sense of justice is articulated in a new bill introduced in the Senate.

Historically, the protection of likeness has been the subject of a relatively narrow area of law called the right of publicity (ROP), a common-law right with statutory provisions in 25 states—and narrow because ROP typically applies to the unauthorized use of celebrity likeness for commercial advertising purposes. But with the introduction of the No FAKES Act, Congress proposes to substantially change the protection of individual likeness in direct response to the capacity of GAI to conjure just about anything from fake news to fake performances by actors and musicians.

Introduced by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC), the acronym stands for the Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The heart of the bill establishes a property right in the likeness of any person, living or dead, and prohibits digital replication without permission. Like copyright, the “digital replication right” is vested in every individual regardless of whether one commercially exploits one’s own likeness, and the right is licensable and transferable to heirs and assigns after death. Post-mortem rights would last 10 years but may be extended through a renewal and registration process administered by the U.S. Copyright Office if the right holder can show active and authorized public use of the voice or visual likeness.

The bill anticipates legitimate creative and newsworthy uses of unlicensed replication and exempts a broad range of uses for purposes like news, documentary, parody, etc. For a purpose to be “newsworthy,” the replicated individual must be the subject of the material created—e.g., a story about Hugh Jackman, not merely a replication of him “cast” for free in your film or commercial. Further, the bill explicitly states that creating a false impression that a given replication is an “authentic” recording of the individual will still trigger liability under the new law. Thus, the documentarian who uses a replication in a scene that looks like real surveillance or cellphone footage will probably need to identify that material as AI generated to avoid liability.

Remedies for violation of the digital replication right include damage awards of $5,000 per depiction made by individuals or online providers, and $25,000 per depiction made by corporate entities other than online providers. Plaintiffs may also seek actual damages and attorney fees, and courts may award punitive damages where unlawful replications entail malice, fraud, or willful ignorance that the use violated the law.

Finally, taking a page from the Copyright Act, No FAKES contains a DMCA-like takedown provision for removal of content alleged to be an unlawful replication, and this provision includes maintenance by the Copyright Office of a database of “agents” to whom such complaints must be submitted. Likewise, familiar safe harbor provisions apply to both product developers and platforms that may, without the knowledge of these providers, be used to produce or distribute unlicensed replications.

Given Silicon Valley’s poor record of DMCA compliance for copyright owners, the takedown provisions in No FAKES naturally raise questions about everyday removal of material, which is often the first, if not the main, remedy non-performers will care about. Regardless, from my perspective, the bill both recognizes a wide range of abuses of GAI replication and exempts or limits liability for an appropriate range of legitimate, First Amendment-protected uses of the technology.

More than a good start, No FAKES appears to draw from many lessons learned over the past 20+ years pitting human and creative rights against the predatory “progress” of Big Tech. I join the Human Artistry Campaign in endorsing this bill and encourage the full Senate to pass it as soon as possible.



COPIED Act Introduced in the Senate with Focus on Content Provenance

On July 11, Senators Cantwell, Blackburn, and Heinrich introduced a bill called the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act. One of many AI-related bills in Congress, the heart of COPIED is transparency in artificial intelligence through implementation of content provenance information (CPI). COPIED requires development of industry standards to create “machine-readable information documenting the origin and history of a piece of digital content, such as an image, a video, audio, or text.”

The Commerce Committee press release announcing the bill states endorsement by News/Media Alliance, National Newspaper Association, Rebuild Local News, NAB, SAG-AFTRA, Nashville Songwriters, Recording Academy, RIAA, music publishers, artists, and performers. Senator Heinrich, who sits on the Senate AI Working Group, stated, “I’m proud to support Senator Cantwell’s COPIED Act that will provide the technical tools needed to help crack down on harmful and deceptive AI-generated content and better protect professional journalists and artists from having their content used by AI systems without their consent. Congress needs to step up and pass this legislation to protect the American people.”

In a nutshell, the bill calls for advanced, hard-to-remove watermarks (or metadata) that would be permanently attached to digital content. In what sounds like a combination of copyright management information (CMI) and a chain-of-title concept, the development of CPI would enable tracing and validating the source of digital content with a variety of goals, including mitigating deepfaked or modified news stories and the use of protected creative content without permission.

The COPIED Act would require the Under Secretary of Commerce for Standards and Technology to oversee the development and implementation of CPI in collaboration with the Register of Copyrights and the Director of the U.S. Patent and Trademark Office. If passed and effectively implemented, the law would prohibit removal, alteration, or tampering with attached CPI for deceptive or adversarial commercial practices; and one part of Section 6 of the bill begins, “It shall be unlawful for any person, for a commercial purpose, to knowingly use any covered content….” [emphasis added] This focus on use of material with attached CPI will be of greatest interest to creative professionals concerned about the myriad ways in which their work is used without permission for the development and commercialization of GAI.

Of course, there are miles to go before we see if and when this bill makes progress, at which point it may provoke some familiar arm-flapping by the Electronic Frontier Foundation (EFF), recycling the same rhetoric it used to complain about digital rights management (DRM) technology. EFF lost its campaign to prove that DRM under §1201 of the Copyright Act is unconstitutional, and in any event, this bill’s proposal for CPI is more reminiscent of §1202, under which it is unlawful to remove copyright management information (e.g., a watermark) for the purpose of copyright infringement. It strikes me that a similar approach would apply to the removal of, or tampering with, content provenance information. After all, if CPI is as robust and tamper-proof as the bill projects, its removal would take some effort and expertise, which itself implies a purpose that is likely to be unlawful.

Stay tuned. We shall see where this goes, but the aims of the COPIED Act strike me as a well-founded good start.

