The Pelosi “drunk” video is not only disturbing, it’s probably illegal.

There should be little doubt that the video clip doctored to make Speaker Pelosi look drunk is a sign of new hazards to come in the digitally-enhanced war on reality.  The video is not even very sophisticated compared to what is already possible with technology like "deepfakes," and we can expect to see far more clever uses of fabricated video, subtle enough to seem plausible and, before long, perhaps even capable of fooling experts.

Moreover, it should be recognized that most of us have bigger public profiles than we would have had twenty years ago.  Replace the Speaker with a university scholar or artist or corporate executive whom some disgruntled party wants to harm, and the relative ease of reputation destruction should be a chilling thought for anyone with a social media account and photos or videos of themselves online. (Show of hands?)

Regardless of where one nets out on Facebook's handling of the Pelosi "drunk" clip (leaving it online with caveats that it is a fake), the episode should probably be treated as an outlier in terms of guidance for content removal, specifically because it involves a high-profile elected official and is, therefore, itself news that should perhaps be judged in that light.  But the video also implicates three violations of law that Facebook could choose to find instructive to its evolving moderation policy.

For consideration, recognize that the Pelosi "drunk" video is intrinsically copyright infringement, libel, and an infringement of the subject's First Amendment rights.  Any one of these should recommend removal as the default choice for the platform, but checking all three boxes should be a no-brainer.  It should also be noted that doctored video used to malign individuals is a byproduct of a culture skewed by the misconception that every video, photo, etc. online is available for common use; and in this regard, the copyright analysis helps identify what the Pelosi video truly is in a legal sense.

Why the Pelosi “Drunk” Video is Not Fair Use

Were the maker of this video to be sued by the copyright owner of the source material, his counsel would no doubt try to defend the fake as “transformative” commentary or parody (and the folks at EFF might even hold their noses and write a supporting brief), but any court that would allow this defense to be considered would have to blind itself to the fact that the sole purpose of the use was to fabricate newsworthy evidence of an event that never happened.

While free speech protects the right to mislead through the production of one’s own video or other media, I would argue that fair use does not support the right to mislead by using a copyrighted work to create a fake “factual” work.  The fair use doctrine, as codified in the Copyright Act of 1976, seeks to exempt unlicensed uses of protected works for purposes such as, but not limited to, commentary, education, news reporting, and parody.  

The fair use principle is court-made doctrine dating back to 1841 in the U.S., and we can bet the farm that no jurist anywhere has ever opined that a socially beneficial aim of this provision is the production of “false testimony.” (Judges are not fans of false testimony.)  And that is the only thing communicated by the doctored Pelosi video:  a false testimony that the Speaker was inebriated in the scene as depicted.  There is no discernible commentary or parody in the use.

In his seminal work on the much-debated "transformativeness" doctrine, Judge Leval writes, "Can it be seriously disputed that history, biography, and journalism benefit from accurate quotation of source documents, in preference to a rewriting of the facts, always subject to the risk that the historian alters the 'facts' in rewriting them?"  This is in defense of making fair uses of a subject's letters or diary entries, but it emphasizes the point that a foundational aim of fair use in a non-fiction context is to improve accuracy in reportage and editorial, not to obliterate it.

To make the distinction clear, a user may take a clip of a public figure speaking and slow down key sections for the purpose of emphasizing the statements he believes to be ridiculous, and that would be a form of commentary and, arguably, fair use.  But even this simple example is distinguishable from the Pelosi video, which contains no evidence of commentary but was presented as a work of non-fiction.

Given the inevitability of more fake video to come, some of which will rely on appropriations of existing material, the courts may need to recognize a standard of “false testimony” as an aim that is distinct from commentary, parody, etc.—a use that does not warrant the protection of fair use and should, therefore, be rejected without analysis under the four-factor test.

The Pelosi “Drunk” Video is Libel

When we view the Pelosi video as an example of "false testimony," it seems only reasonable to conclude that it is libelous.  And if it featured regular folk rather than an elected official, the libel would be readily apparent to the regular folk being smeared.  Politicians operate in a pejorative environment and are, therefore, immunized to an extent against many slings and arrows; as public figures, they must also show "actual malice" to prevail in a defamation claim.

But even though this video features the Speaker, that does not change the fact that it objectively makes a false statement posing as fact about an individual, one that could be damaging to reputation and career.  After all, if Elon Musk calling someone "pedo guy" on Twitter can potentially be libel, then a video falsely depicting someone engaging in disreputable or illegal conduct very likely meets that standard.

Section 230 of the CDA relieves web platforms of civil liability for knowingly continuing to host libelous material posted by their users, but given the extent to which Facebook is lately twisting itself in knots seeking standards for content removal, perhaps adhering to the spirit of Section 230 would be helpful in that effort.  While the statute itself may be flawed, the clear intent of Congress was to encourage good-faith content moderation by site operators, and in that spirit, removing doctored material made with a clear intent to damage a reputation and mislead the public would seem to fit that particular bill.

The Pelosi “Drunk” Video Infringes First Amendment Rights

Calling the video a potentially “unfair use,” my friend and colleague Neil Turkewitz further notes that if a doctored video stands as “false testimony,” then maintaining its presence on a web platform like Facebook implicates the platform in the act of “compelled speech.”  Compelled speech is an infringement of an individual’s rights, and while Facebook is under no obligation to uphold the First Amendment, it can certainly elect not to participate in conduct that violates the principles of free expression in this manner.

Compelled speech and forced silence through intimidation are two overlooked downsides of internet culture, often lost amid the general ebullience about the wonders these platforms have done for the power of speech.  If you've seen the latest "deepfake" video samples showing static images of Einstein, Marilyn Monroe, and the Mona Lisa transformed into talking motion pictures, it's not hard to imagine how anyone may soon become the target of some personal vendetta.  And it's a safe bet that any victims of such attacks will hold Facebook, or the hosting platform, responsible, whether in Congress or simply in the market.

Guidance for Facebook et al?

We can assume that nobody will raise a copyright issue regarding the source material for the Pelosi clip and that Speaker Pelosi will not be suing anybody for libel or infringement of her speech rights, but I raise these topics because they could be relevant if the material used and the individual(s) maligned were only slightly different.  Meanwhile, as Facebook and other platforms try to develop new "community standards" that actually serve the community, it seems to me that existing law provides some rather handy guidelines. Perhaps as an exercise to hone its moderation practices, Facebook's team might imagine that it is potentially liable for any of these transgressions and then decide how it would handle a similar video it knew to be fake.  As I say, ticking off three boxes—copyright infringement, libel, and infringing the individual's speech right—is probably a good indication that the material should be taken down.

Moderation in all things. Except perhaps social media.

Is it just me, or have the digital rights folks lately shifted the narrative on the subject of platform responsibility and content moderation?  Where once they could be counted on to repeat the commandment Thou Shalt Not Touch Online Content, I perceive a more nuanced (sounding) agenda now recommending Best Practices for Touching Online Content If You Really Must.

For instance, in a blog post of April 29, Jillian C. York and Corynne McSherry of the Electronic Frontier Foundation declared that content moderation on social media platforms is broken, outlining four key reasons why various attempts to date have been fraught with problems—all descriptions I would not quarrel with per se.  Neither would I disagree with the following statement they post beneath the headline No More Magical Thinking:

We shouldn’t look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable.  As companies increasingly use artificial intelligence to flag or moderate content … we’re inevitably going to see more errors. 

It is hard to refute the premise that a platform will have a very difficult time moderating content without error.  But I would also contend that those who believe satisfactory moderation guidelines can be developed at all (moderation in moderation, as Wilde would say) are still engaged in magical thinking.  Let's face it: even before we finish the thought, the process has already stalled.  Error according to whom?

I personally could not care less if Facebook tosses Alex Jones, Louis Farrakhan, and Milo Yiannopoulos into the digital oubliette, but that decision was considered an egregious error by many—not only fans of one member of that bizarre triad, but speech rights advocates and those who recommend keeping hate-mongers and other extremists in plain sight under the theory that it is better to know thy enemy.

Regardless, while it is very easy for many of us to say good riddance to certain types of high-profile provocateurs, it is a far more complex challenge to define broadly-applicable terms recommending exactly why an Alex Jones should be removed.  And Jones is relatively simple compared to more subtle examples of what might still be deemed toxic speech.  Even perhaps Milo. But picking up on EFF’s reference to “Silicon Valley or anyone else,” it seems that some parties believe they can be the “international speech police” as long as they really really care about speech.  

The Global Digital Policy Incubator at Stanford, along with the group ARTICLE 19, recently released a broad report describing a working meeting held in February to develop what the group calls Social Media Councils (SMCs).  Responding to concerns that the platform companies cannot be trusted to self-regulate and that government regulators are likely to overreact or be tainted by political agendas, SMCs would operate independently, either at a regional, national, or international level, to draft universal guidelines for moderation and, potentially, to function as an appeals body adjudicating alleged errors in content removal.

While the report does acknowledge the scale, complexity, and cost of implementing SMCs, I think it woefully underestimates the ambition of the whole concept, which I would compare to establishing a UN for the internet.  While there are several moving parts to the proposal begging for comment, I will presume to jump to the conclusion that the market itself, for better or worse, will change long before SMCs can ever be implemented.  But even if SMCs could be created and develop useful guidelines, this would do nothing to mitigate the underlying challenge that social media helps to foster completely false narratives perpetuated by perfectly reasonable people.

Social media’s harm to liberal democracies does not end with the excision of the most extreme and obvious purveyors of hate speech and incitements to violence. The harm done by social media and other platforms is much more subtle than that, and thoughtful, decent people are often the conduits, if for no other reason than the fact that every exchange, no matter how innocuous, teaches the algorithm how to market to someone. If that’s marketing a product, no big deal; if it’s marketing a false narrative about public policy, it’s a very big deal.

As I have said many times, what concerned me most about the anti-SOPA campaign of 2011/12 was not the copyright issue, but the staggering effectiveness of hyperbole and misinformation. “This could be any issue,” was my first thought at the time, and indeed, we have now seen some of the worst effects that data-driven misinformation has had on democratic countries around the world. Yet, it seems as though the EFF, ARTICLE 19, et al, in their recent efforts to recommend guidelines for moderation, are still clinging to a general belief that social media has largely been a positive force for democratic values. Rebecca MacKinnon, Director of Ranking Digital Rights, states:

“While the internet and related technologies have indeed helped people circumvent traditional barriers to holding governments and powerful corporations accountable, they did not shatter as many walls as democracy and human rights activists once hoped and expected. Instead, daily headlines report how they make us vulnerable to mass surveillance, unaccountable censorship, disinformation, and viral hate speech that incites violence. Entirely new channels have been created for abusing power, in ways that we are still struggling to understand. In many places and on many issues, exercising and defending human rights has grown more difficult.”

I suspect the overall picture will not be improved simply by removing the Alex Joneses and terrorist recruiters from mainstream sites, or, for that matter, by trying to mitigate erroneous removals of protected speech; we have had roughly fifteen years of largely un-moderated social media, and it has not done much good. Speaking personally, I have all but bailed from interacting on Facebook because I do not want to feed the machine any more data, though I do not imagine that tens of millions of people will suddenly feel the same. Short of that, it seems the only meaningful moderation has to come from users themselves. If social media platforms are going to remain filters of information and debate, then at least recognizing that they are opaque, manipulated, advertising and data-harvesting machines should foster a healthy skepticism. And that may prove more important than all the ambitious best-practices proposals any group can devise.

Platforms Wrestle With the Difficult After Years of Ignoring the Easy

A new, in-depth post by Mike Masnick at Techdirt correctly describes many of the challenges inherent to platform moderation of content. It was enough of a departure from his usual “anything goes” stance that he wrote a preamble acknowledging that he was likely to piss off a few readers. And it is, admittedly, a little bit fun to watch some of the web cheerleaders stumble these days as they try to walk back the utopian view that all content online is fundamentally free speech and that removal of anything is inherently censorship.

Now that the public conversation is less comfortable with “free speech” as a universal answer—beginning with Facebook taking money for political ads made by Russian agents—Masnick et al have little choice other than to engage in a more nuanced dialogue that at least begins with the premise that some platform responsibility is worth considering. His post highlights a few possible solutions to “bad” content, including his own proposal; and while I think he correctly describes the complex nature of content moderation by administrators, I’m not sure any of the solutions cited address the real problem. His highlights include the following:

Yair Rosenberg recommends counterprogramming, which is essentially responding to misinformation with facts at the point of user interaction. Tim Lee advocates down-ranking less credible sources that appear to be news. David French proposes that the platforms only remove libel and slander because these don’t require new legal definitions. And Masnick proposes that, for instance, Facebook abdicate its centralized control over filtering or adjusting its algorithm and instead cede that power to users to set parameters for what they want to see.

“And, yes, that might mean some awful people create filter bubbles of nonsense and hatred,” Masnick writes, “but average people could avoid those cesspools while at the same time those tasked with monitoring those kinds of idiots and their behavior could still do so.” To me, this statement implies that Masnick’s “protocols” solution is largely cosmetic, that it may result in us “average people” not seeing as much garbage, but it in no way alters the underlying model of “surveillance capitalism” and merely papers over the social disease whereby garbage continues to gain undue support and have undue influence in the mainstream. (This was discussed in my last post about the paper by Alice E. Marwick on why we share fake news.)

When YouTube and Facebook shut down the Infowars accounts of conspiracy nut Alex Jones last week, doubtless some cheered, others cried foul, and still others warned that attempting to silence even the outrageous wack-jobs can turn them into martyrs and galvanize their cult-like followers into an even larger mob. That prediction is almost certainly correct and, thus, points to the real question I have, which is not whether Facebook should keep Jones off my feed to avoid offending me, but why so much outright garbage information currently plays such an outsized role in the social and political narrative of the United States.

I can see how some of the solutions Masnick mentions, including his own, might diminish some of the low-level sharing of junk news by “average” thoughtful people, but none of these proposals tackles the big social phenomenon itself — that the internet has been the catalyst for elevating toxic misinformation to an unprecedented level of tangible influence. The crazies who used to be conveniently segregated by geography (the proverbial idiots in every village) can now coalesce in cyberspace, finding strength in numbers, reinforcing their “deep stories,” (to use Alice Marwick’s term), and taking tangible action in the streets or at the polls.

So, while the tech pundits and the internet companies look for (or pay lip-service to looking for) technological responses to these social ills, the underlying reasons why we are suddenly reacting to “bad” content and putting pressure on the major platforms may not actually be addressable—either by the companies simply removing content or by public policy that attempts to parse hate speech and other highly-subjective concepts.

Masnick is not wrong that the task of editing speech by the platforms is extremely difficult, which is presumably the main reason he advocates putting that control in the hands of users. As I say, I’m ambivalent about this approach because I think the end result will be the same—increased credibility for outright crazy shit via one portal or another. If there is an antidote to that problem, I strongly suspect it is not technological but human. But, at least even the tech-utopians now have to acknowledge that treating all online content as sacred has had some very negative consequences, so perhaps we can now have a different discussion about content that would not be protected speech in any context.

For those of us who have advocated platform responsibility for quite some time, it is amusing, if not frustrating, to watch the industry wrestle with the truly difficult issue of moderation after years of refusing to compromise on the comparatively simpler issue of removing material that is patently illegal. For instance, weeding out material that infringes copyright, or which a court has held to be libelous or otherwise harmful to a claimant, is much easier than deciding when it’s okay to remove or demote “bad” speech. Yet the major platforms, along with considerable help from opinion-makers like Masnick, have historically responded to the proposed removal of unprotected or illegal content as a prelude to “rampant censorship” and the destruction of all that is beautiful about the internet.

This recent shift in posture implies two things in my view: the first is that the platforms can indeed be more cooperative in responding to illegal content without damaging the benefits of the internet; and the second is that those benefits have never been all they’re cracked up to be. Admitting to the latter would go a long way toward reframing a more rational discussion about the former.