The Pelosi “drunk” video is not only disturbing, it’s probably illegal.

There should be little doubt that the video clip doctored to make Speaker Pelosi look drunk is a sign of new hazards to come in the digitally enhanced war on reality.  The video is not even very sophisticated compared to what is actually possible right now with technology like “deepfakes,” and we can expect to see far more clever uses of fabricated video that are subtle enough to seem plausible, and perhaps even to fool experts, before long.

Moreover, it should be recognized that most of us have bigger public profiles than we would have had twenty years ago.  Replace the Speaker with a university scholar or artist or corporate executive whom some disgruntled party wants to harm, and the relative ease of reputation destruction should be a chilling thought for anyone with a social media account and photos or videos of themselves online. (Show of hands?)

Regardless of where one nets out on Facebook’s handling of the Pelosi “drunk” clip (leaving it online with caveats that it is a fake), it should probably be viewed as an outlier in terms of guidance for content removal, specifically because it involves a high-profile elected official and is, therefore, news in itself that perhaps should be viewed in that context.  But the video also implicates three violations of law that Facebook could choose to find instructive for its evolving moderation policy.

For consideration, recognize that the Pelosi “drunk” video is intrinsically copyright infringement, libel, and an infringement of the subject’s First Amendment rights.  Any one of these should recommend removal as the default choice for the platform, but checking all three boxes should be a no-brainer.  It should also be noted that doctored video used to malign individuals is a byproduct of a culture skewed by the misconception that every video, photo, etc. online is available for common use; and in this regard, the copyright analysis helps identify what the Pelosi video truly is in a legal sense.

Why the Pelosi “Drunk” Video is Not Fair Use

Were the maker of this video to be sued by the copyright owner of the source material, his counsel would no doubt try to defend the fake as “transformative” commentary or parody (and the folks at EFF might even hold their noses and write a supporting brief), but any court that would allow this defense to be considered would have to blind itself to the fact that the sole purpose of the use was to fabricate newsworthy evidence of an event that never happened.

While free speech protects the right to mislead through the production of one’s own video or other media, I would argue that fair use does not support the right to mislead by using a copyrighted work to create a fake “factual” work.  The fair use doctrine, as codified in Section 107 of the Copyright Act of 1976, seeks to exempt unlicensed uses of protected works for purposes such as, but not limited to, commentary, education, news reporting, and parody.

The fair use principle is a court-made doctrine dating back to 1841 in the U.S., and we can bet the farm that no jurist anywhere has ever opined that a socially beneficial aim of the doctrine is the production of “false testimony.” (Judges are not fans of false testimony.)  And that is the only thing communicated by the doctored Pelosi video: a false testimony that the Speaker was inebriated in the scene as depicted.  There is no discernible commentary or parody in the use.

In his seminal work on the much-debated “transformativeness” doctrine, Judge Pierre Leval writes, “Can it be seriously disputed that history, biography, and journalism benefit from accurate quotation of source documents, in preference to a rewriting of the facts, always subject to the risk that the historian alters the ‘facts’ in rewriting them?”  This is in defense of making fair uses of a subject’s letters or diary entries, but it emphasizes the point that a foundational aim of fair use in a non-fiction context is to improve accuracy in reportage and editorial, not to obliterate it.

To make the distinction clear, a user may take a clip of a public figure speaking and slow down key sections for the purpose of emphasizing the statements he believes to be ridiculous, and that would be a form of commentary and, arguably, fair use.  But even this simple example is distinguishable from the Pelosi video, which contains no evidence of commentary but was presented as a work of non-fiction.

Given the inevitability of more fake video to come, some of which will rely on appropriations of existing material, the courts may need to recognize a standard of “false testimony” as an aim that is distinct from commentary, parody, etc.—a use that does not warrant the protection of fair use and should, therefore, be rejected without analysis under the four-factor test.

The Pelosi “Drunk” Video is Libel

When we view the Pelosi video as an example of “false testimony,” it seems only reasonable to conclude that it is libelous.  And if it featured regular folks rather than an elected official, that would be readily apparent to the people being smeared.  Politicians operate in a pejorative environment and are, therefore, immunized to an extent against many slings and arrows.

But even though this video features the Speaker, that does not change the fact that it objectively makes a false statement, posing as fact, about an individual, one that could be damaging to reputation and career.  After all, if Elon Musk calling someone “pedo guy” on Twitter can potentially be libel, then a video falsely depicting someone engaging in disreputable or illegal conduct very likely meets that standard.

Section 230 of the CDA shields web platforms from civil liability even for knowingly continuing to host libelous material, but given the extent to which Facebook has lately been twisting itself in knots seeking standards for content removal, perhaps adhering to the spirit of Section 230 would be helpful in that effort.  While the statute itself may be flawed, the clear intent of Congress was to encourage good-faith content moderation by site operators, and in that spirit, removing doctored material made with a clear intent to damage a reputation and mislead the public would seem to fit that particular bill.

The Pelosi “Drunk” Video Infringes First Amendment Rights

Calling the video a potentially “unfair use,” my friend and colleague Neil Turkewitz further notes that if a doctored video stands as “false testimony,” then maintaining its presence on a web platform like Facebook implicates the platform in the act of “compelled speech.”  Compelled speech is an infringement of an individual’s rights, and while Facebook is under no obligation to uphold the First Amendment, it can certainly elect not to participate in conduct that violates the principles of free expression in this manner.

Compelled speech and forced silence through intimidation are two downsides of internet culture that are overlooked amid the general ebullience over the wonders these platforms have done for the power of speech.  If you’ve seen the latest “deepfake” video samples showing static images of Einstein, Marilyn Monroe, and the Mona Lisa transformed into talking motion pictures, it’s not hard to imagine how anyone might soon become the target of some personal vendetta.  And it’s a safe bet that any victims of such attacks will consider Facebook, or the hosting platform, responsible, maybe in Congress or maybe just in the market.

Guidance for Facebook et al.?

We can assume that nobody will raise a copyright issue regarding the source material for the Pelosi clip and that Speaker Pelosi will not be suing anybody for libel or infringement of her speech rights, but I raise these topics because they could be relevant if the material used and the individual(s) maligned were only slightly different.  Meanwhile, as Facebook and other platforms try to develop new “community standards” that actually serve the community, it seems to me that existing law provides some rather handy guidelines.  Perhaps as an exercise to hone its moderation practices, Facebook’s team might imagine that the platform is potentially liable for any of these transgressions and then decide how it would handle a similar video it knew to be fake.  As I say, ticking off all three boxes (copyright infringement, libel, and infringement of the individual’s speech rights) is probably a good indication that the material should be taken down.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
