Generative AI Is a Lot Like a Video Tape Recorder, No?

In my last post, I focused on the hypothetical fair use defense of generative AI under the principles articulated in the Google Books decision of 2015. In this post, I want to address another claim that has arisen—both on social media and in comments to the Copyright Office—namely that generative AI companies should be shielded against secondary liability for copyright infringement under the “Sony Safe Harbor.”

This refers to the 1984 Supreme Court decision in Sony v. Universal (the “Sony Betamax” case), which held that the video tape recorder (VTR) was legal based on two interrelated findings: 1) the fair use finding that consumers had a right to “time-shift” the viewing of televised material; and 2) that the VTR was therefore capable of substantial non-infringing use. Thus, although some parties would inevitably use the VTR for infringing purposes, Sony Corporation could not be held liable for contributory infringement in such instances.

Clearly, there are some bright, shining distinctions between the VTR and a generative AI. The VTR was not designed by inputting millions of AV works into a computer model, and its purpose was not to generate “new” AV works. Instead, those obsolete machines performed two very basic functions: they made videotape copies of AV material, and they displayed those copies for a specific type of personal use.[1] As noted in the post about Google Books, the Court in Sony also had before it a fully developed product with a clearly defined purpose in the VTR. Again, this is not so when it comes to understanding the purpose of a given generative AI.

I believe the novelty (and even the uncertainty) of the AI’s purpose is fatal to the argument that generative AI companies are necessarily shielded by the “Sony Safe Harbor.” This is because, in Sony, the anticipation of substantial non-infringing use rests on the novel “time-shifting” notion introduced into the fact-intensive fair use finding. In other words, “time-shifting” was a principle specific to the technology at issue, and no analogous concept lurks anywhere in the purpose of a given AI, let alone all AIs still in development. Imagine if Sony Corp. had walked into court with a box of assembled electronic parts, declared that it was not quite sure what the box could or would do yet (though it might distribute homemade copies into the market!), but that it would really like a fair use decision and a liability ruling in its favor.

Non-Infringing Use Under Different Rationales

To be clear, it is plausible—even reasonable—to expect that the majority of outputs by a generative AI are, or will be, non-infringing. In fact, I believe this is one of the pitfalls of hoping that copyright can address the presumed threat of AI outputs: the substantial similarity bar for finding that Work A infringes Work B is thrown into a doctrinal tailspin. For example, when a person knowingly copies a work, this fosters a strong claim of infringement, while independent creation is a non-infringing act. And then there are shades between willful infringement, innocent infringement, and non-infringement, depending on the facts of a particular case.

In addition to copyright’s limiting doctrines, which allow myriad “similar” works to coexist without legal conflict, I predict that generative AI has the potential to warp the evidentiary foundations necessary to prove infringement under a substantial similarity test. If that is correct, it may be one rationale for predicting widespread non-infringing use, but it is highly distinguishable from the foundations of the “Sony Safe Harbor.” Meanwhile, the consideration of secondary liability (as with fair use) depends substantially on the purpose of the technology at issue—and that purpose remains unclear.

The mundane, mechanical VTR potentially threatened only the “making available” rights for works produced and owned by creators. This is not remotely comparable to a computer model “trained” with millions of protected works for the purpose of enabling that model to produce new “works.” To paraphrase my brief comments to the Copyright Office, if a particular work goes into the machine and a potentially infringing copy of that work comes out of the machine, I do not believe there is any authority that broadly shields the developer from liability.

With that example in mind, though, it is worth noting that a code-based service, unlike a physical electronic device, can be revised concurrently with its delivery to the market. Thus, unlike Sony and its Betamax, the AI developer looking to limit liability for copyright infringement has the opportunity (dare we say obligation?) to make every effort to design and continually update a system to avoid copyright infringement. This may entail licensing the materials used to “train” a generative AI and/or ongoing tweaking of the algorithm to avoid infringing outputs. Either way, if developers don’t want to build these kinds of safeguards for the most revolutionary tech of 2023, surely they cannot be allowed to hide behind a liability shield established in 1984 for a box now collecting dust in the attic.


[1] They also frustrated many consumers who tried to set the clocks, but that’s another matter.

