The Pelosi “drunk” video is not only disturbing, it’s probably illegal.

There should be little doubt that the video clip doctored to make Speaker Pelosi look drunk is a sign of new hazards to come in the digitally enhanced war on reality.  The video is not even very sophisticated compared to what is actually possible right now with technology like “deepfakes,” and we can expect to see far more clever uses of fabricated video that are subtle enough to seem plausible—perhaps even subtle enough to fool experts before long.

Moreover, it should be recognized that most of us have bigger public profiles than we would have had twenty years ago.  Replace the Speaker with a university scholar or artist or corporate executive whom some disgruntled party wants to harm, and the relative ease of reputation destruction should be a chilling thought for anyone with a social media account and photos or videos of themselves online. (Show of hands?)

Regardless of where one nets out on Facebook’s handling of the Pelosi “drunk” clip—leaving it online with caveats that it is a fake—it should probably be viewed as an outlier in terms of guidance for content removal, specifically because it involves a high-profile elected official and is, therefore, itself news that perhaps warrants treatment in that context.  But the video also implicates three violations of law that Facebook could choose to find instructive to its evolving moderation policy.

For consideration, recognize that the Pelosi “drunk” video is intrinsically copyright infringement, libel, and an infringement of the subject’s First Amendment rights.  Any one of these should recommend removal as the default choice for the platform, but checking all three boxes should be a no-brainer.  It should also be noted that doctored video used to malign individuals is a byproduct of a culture skewed by the misconception that every video, photo, etc. online is available for common use; and in this regard, the copyright analysis helps identify what the Pelosi video truly is in a legal sense.

Why the Pelosi “Drunk” Video is Not Fair Use

Were the maker of this video to be sued by the copyright owner of the source material, his counsel would no doubt try to defend the fake as “transformative” commentary or parody (and the folks at EFF might even hold their noses and write a supporting brief), but any court that would allow this defense to be considered would have to blind itself to the fact that the sole purpose of the use was to fabricate newsworthy evidence of an event that never happened.

While free speech protects the right to mislead through the production of one’s own video or other media, I would argue that fair use does not support the right to mislead by using a copyrighted work to create a fake “factual” work.  The fair use doctrine, as codified in the Copyright Act of 1976, seeks to exempt unlicensed uses of protected works for purposes such as, but not limited to, commentary, education, news reporting, and parody.  

The fair use principle is court-made doctrine dating back to 1841 in the U.S., and we can bet the farm that no jurist anywhere has ever opined that a socially beneficial aim of this provision is the production of “false testimony.” (Judges are not fans of false testimony.)  And that is the only thing communicated by the doctored Pelosi video:  a false testimony that the Speaker was inebriated in the scene as depicted.  There is no discernible commentary or parody in the use.

In his seminal work on the much-debated “transformativeness” doctrine, Judge Leval writes, “Can it be seriously disputed that history, biography, and journalism benefit from accurate quotation of source documents, in preference to a rewriting of the facts, always subject to the risk that the historian alters the ‘facts’ in rewriting them?”  This is in defense of making fair uses of a subject’s letters or diary entries, but it emphasizes the point that a foundational aim of fair use in a non-fiction context is to improve accuracy in reportage and editorial, not to obliterate it.

To make the distinction clear, a user may take a clip of a public figure speaking and slow down key sections for the purpose of emphasizing the statements he believes to be ridiculous, and that would be a form of commentary and, arguably, fair use.  But even this simple example is distinguishable from the Pelosi video, which contains no evidence of commentary but was presented as a non-fiction work.

Given the inevitability of more fake video to come, some of which will rely on appropriations of existing material, the courts may need to recognize a standard of “false testimony” as an aim that is distinct from commentary, parody, etc.—a use that does not warrant the protection of fair use and should, therefore, be rejected without analysis under the four-factor test.

The Pelosi “Drunk” Video is Libel

When we view the Pelosi video as an example of “false testimony,” it seems only reasonable to conclude that it is libelous.  And if it featured regular folk rather than an elected official, this would become readily apparent to the regular folk being smeared.  Politicians operate in a pejorative environment and are, therefore, immunized to an extent against many slings and arrows.

But even though this video features the Speaker, that does not change the fact that it objectively makes a false statement posing as fact about an individual, a statement that could be damaging to reputation and career.  After all, if Elon Musk calling someone “pedo guy” on Twitter can potentially be libel, then a video falsely depicting someone engaging in disreputable or illegal conduct very likely meets that standard.

Section 230 of the CDA relieves web platforms of civil liability for knowingly continuing to host libelous material, but given the extent to which Facebook is lately twisting itself in knots seeking standards for content removal, perhaps adhering to the spirit of Section 230 would be helpful in that effort.  While the statute itself may be flawed, the clear intent of Congress was to encourage good-faith content moderation by site operators, and in that spirit, removing doctored material made with a clear intent to damage a reputation and mislead the public would seem to fit that particular bill.

The Pelosi “Drunk” Video Infringes First Amendment Rights

Calling the video a potentially “unfair use,” my friend and colleague Neil Turkewitz further notes that if a doctored video stands as “false testimony,” then maintaining its presence on a web platform like Facebook implicates the platform in the act of “compelled speech.”  Compelled speech is an infringement of an individual’s rights, and while Facebook is under no obligation to uphold the First Amendment, it can certainly elect not to participate in conduct that violates the principles of free expression in this manner.

Compelled speech and forced silence through intimidation are two overlooked downsides of internet culture amid the general ebullience over how these platforms have done wonders for the power of speech.  If you’ve seen the latest “deepfake” video samples showing static images of Einstein, Marilyn Monroe, and the Mona Lisa transformed into talking motion pictures, it’s not hard to imagine how anyone may soon be the target of some personal vendetta.  And it’s a safe bet that any victims of such attacks will consider Facebook, or the hosting platform, responsible—maybe in Congress or maybe just in the market.

Guidance for Facebook et al?

We can assume that nobody will raise a copyright issue regarding the source material for the Pelosi clip and that Speaker Pelosi will not be suing anybody for libel or infringement of her speech rights, but I raise these topics because they could be relevant if the material used and the individual(s) maligned were only slightly different.  Meanwhile, as Facebook and other platforms try to develop new “community standards” that actually serve the community, it seems to me that existing law provides some rather handy guidelines. Perhaps as an exercise to hone its moderation practices, Facebook’s team might imagine that it is potentially liable for any of these transgressions and then decide how it would handle a similar video it knew to be fake.  As I say, ticking off three boxes—copyright infringement, libel, and infringing the individual’s speech right—is probably a good indication that the material should be taken down.

So Long Facebook (Mostly)

At a recent gathering of college alumni, a friend asked, “Is it me, or are you less active on Facebook these days?”  He was right.  I have all but bailed on the platform.  As a practical matter, it was just becoming a big time-suck; and as we all know from experience, engaging via social media doesn’t only occupy the measurable time spent reading, lurking, or discussing, but rather its reverb continues well into our valuable subconscious time while our brains continue to mull, resolve, or curse the impressions and interactions we experienced hours, days, weeks, or even years ago.  (In my case, it’s a lot of cursing of late.)

With regard to politics (the primary substance of Facebook), I feel amply supplied with reasons to be angry at the circus we politely call the Trump administration and really don’t need a steady litany of memes to keep feeding me variations on that theme—to say nothing of the occasional false narratives.  Sure, one out of every 20 or so memes is funny or clever enough to share, but whatever.  When the anti-copyright nuts in Europe declared that Article 13 of the EU Directive would “end memes,” it was a lie; but it was also a provocation to which I think the only sane response is, Who gives a damn?

The under-examined psychological effect of constantly absorbing millions of impressions is just one reason to reject the historic—and until recently, popular—premise that social media platforms represent some previously-untapped genius of the demos (a.k.a. the wisdom of crowds).  But it was upon this crumbling rock that the major internet companies built their church, evangelizing the message that everything online is speech and, therefore, whenever an iota of content is removed, an angel dies.

Of course, this was a double-lie.  Not only because the premise was flawed and plenty of online “content” is not protected speech (even by U.S. law), but because the major web corporations are demonstrably not the harbingers of nascent democracy in authoritarian nations or even the champions of these values in extant liberal democracies.  Facebook, Google, et al comply with censorship in foreign markets; and domestically, they manipulate, remove, or prioritize “information” in an ongoing effort to retain user attention for as long as possible—all in the service of advertising revenue.     

In an older post, I wrote that we are like ants in Zuckerberg’s farm, but that metaphor isn’t right because ant farms are not experimental.  They’re just a way for a kid to observe ants being ants; they do not inherently change ant behavior.  More accurately, as Facebook users, we are voluntary lab rats in a grand experiment whose effects are not fully understood, although it is at last being discussed that perhaps the heightened volume of ugliness in our contemporary politics has something to do with this relatively new means of “connecting” to one another.

Doc Film Exposes Facebook’s Underbelly

A new documentary film called The Cleaners, first aired on PBS this week, provides a glimpse into a truly dark component of Silicon Valley’s ebullient—and arrogant—posturing as global champions of freedom and smiley emojis.  Far from the comfort of your laptop, and even farther from the glossy playground of Facebook headquarters in Menlo Park, a young woman walks among the garbage scavengers in her squalid Manila neighborhood and tells us that scavenging would be her fate, too, if she were not a Facebook moderator.  She is just one of hundreds of worker bees to whom Facebook has outsourced millions of decisions per week as to what should or should not be seen on its platform.  As NPR’s Ari Shapiro puts it in a story about the film …

“Manila [capital of the Philippines] was a place where the analog toxic waste was sent from the Western world, has been sent there for years on container ships. And today the digital garbage is brought there. Now thousands of young content moderators in air conditioned office towers are clicking through the infinity [sic] toxic sea of images and tons of intellectual junk.”

With assigned quotas to process 25,000 posts per day, these Filipino moderators represent the majority of Facebook’s outsourced workforce tasked with rejecting content that does not comply with “community standards.”  They review and remove the most depraved material—the murders, child rapes, and tortures that would otherwise seep into our relatively benign feeds.  “I have seen hundreds of beheadings,” says one young man who spends most of his time sifting through terrorist content like ISIS videos. 

The filmmakers Hans Block and Moritz Riesewieck, in just about an hour and twenty minutes, provide several points of view from which to consider this bleak cubicle of the world’s most populated social media platform.  Some of the moderators see themselves as crusaders, literally keeping “sin” off the web, with one young man comparing his mission to that of Philippine President Duterte’s alleged policy of slaughtering the nation’s drug addicts.  At the other extreme, some of the moderators suffer PTSD from chronic exposure to so many images of horror, leading to at least one suicide highlighted in the film.

In one segment, former Googler Tristan Harris, now at the Center for Humane Technology, describes how in Myanmar (Burma), “Facebook is their internet reality, and it’s literally feeding a genocide without any accountability.”  He explains how the circumscribed nature—the “walled garden”—of Facebook perpetuates hatred and violence against the Rohingya refugees in that nation as, perhaps, the most extreme example of what we comfortably refer to here as the “filter bubble”: a self-fulfilling information feedback loop resulting in mass murder, torture, and rape.

Harris says it is a misconception that these technologies are neutral at all.  The goal of their design is to grab and hold attention, “and outrage is really good at doing that,” he says.  “The whole environment is tuned to offer us the worst of ourselves.”  And though we are certainly not committing genocide in the U.S., it is impossible to believe that a much lighter version of this information “filter bubble” does not contribute to a bunch of idiot civilians loading up their weapons to confront the “invasion” of refugees heading toward the U.S. border from Central America.

The Cleaners does an excellent job of balancing intersecting narratives—focusing at times on the individual moderators (arguably Silicon Valley’s version of foreign sweat-shop labor), and also on the broader subject of censorship and who is making these decisions.  For instance, the film briefly explores the social media excommunication of artist Illma Gore, who painted the “small penis” nude of Donald Trump that went mega-viral in February 2016.  (Gore was physically attacked by thuggish Trump supporters in L.A., and the image was the target of a wrongful use of the DMCA takedown process.)  

Technically a violation of Facebook’s terms of service and clearly offensive to many, Gore’s painting is also unquestionably artistic expression and undeniably protected speech in the United States.  So, if indeed a Filipino moderator made the decision to remove her image from the platform, as the film implies, that’s fine within the context of Facebook’s right to maintain its “community standards,” but we must then insist that these companies stop appealing to the First Amendment and pretending to be neutral providers of public fora.  

This is not to understate the reality that the current president overtly thrives on false narratives, but if the political stylings of Donald Trump appear to be an unprecedented shock to the system because they reject the norms of statesmanship, I recommend David Lowery’s latest post on The Trichordist reminding us how Silicon Valley giants, under the not-so-watchful eye of the Obama administration, managed to leverage “a strange mix of anti-establishment lefties, right libertarians, social progressives and lots and lots of corporate money” into a coalition that fundamentally advocated the end of statehood itself.  Hence, the erosion of democratic principles as embodied in the Constitution hardly begins with Trump.  He has certainly exploited the hell out of this trend, but its origins may be found in the hippie/libertarian crucible of the internet industry.

More mundanely, for me personally, Illma Gore’s painting is also a pretty good example of why I can hardly look at Facebook anymore.  As I say, there are enough substantive, and very serious, reasons why I think everyone across the left-right spectrum should recoil at the politics of Donald Trump; and references to his alleged dick size are as counter-productive as they are obvious.   In this regard, a steady stream of non-informative, yet provocative, inputs has to have a psychological effect that is generally and collectively negative.  At the very least, it is exhausting and depressing. 

If the “cleaners” in Manila suffer potent symptoms from exposure to high doses of truly hideous imagery, I suppose we must assume that a more subtle version of this psychosis occurs in our brains through low-dose exposure to the merely bad—or the misleading.  And so, as much as it can be fun to post comments (usually political) on Facebook and receive that little dopamine hit from “Likes” etc., I find it impossible to shed the awareness that, in a very small way, I am playing the role of lab rat in an experiment that, from all available evidence, is not making the world better.

The problem is not how much or how little material is online—there are over two billion websites, with trillions of files constantly uploaded and removed every year—the problem is the distorted view of the world as seen through these mesmerizing kaleidoscopes of information.  And with that said, I shall publish this post and then, yes, share it via social media, including Facebook.  This irony is not only blatantly apparent, but it is the premise of a different discussion about the possible penalties for non-participation in social media platforms, which promote the active over the less active.  How this might affect market opportunities for people in the near future is a question worth asking, but in another post.


On related topics, see the cleverly titled blog The Illusion of Volition by Sarah T. Roberts, who also appears in The Cleaners.

Platform Responsibility? How about starting with legal content?

It may be hip these days to talk about platform responsibility, but just a couple of years ago, there were no mainstream conversations about how the operations and policies of online service providers might be enabling misinformation, hate speech, propaganda, etc. And while mea culpas from Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey make headlines, and Google tries to pitch the general message that “we’re all in this together,” my more cynical self wonders whether these service providers are just waiting out the news cycle. Waiting until we grow weary of this new discussion, which just happens to be focused on some of the most difficult (if not intractable) questions, like where to draw lines on protected speech.

As alluded to in this post, it is my personal theory that if the major service providers do not change their policies, practices, and rhetoric with regard to illegal content—or support of illegal content—then all this chatter about finding balance in the realm of protected speech is just pandering noise that will soon die down. I do not doubt that Zuckerberg, Dorsey, et al feel personally conflicted about the role their platforms have played in elevating rank divisiveness into the mainstream of political discourse; but when these guys, and other representatives of OSPs say things like “We have to do better,” I can’t help but think of the litany of cases in which internet companies have fought against complying with established legal principles at every turn.

I think of Google fighting a Canadian Supreme Court order in Equustek v. Google to delist links to a counterfeit product supplier. Or Yelp in Hassell v. Bird refusing to remove a review that a court held to be libelous. Or the fact pattern in BMG v. Cox Communications, which revealed a systemic policy whereby the OSP avoided compliance with the terms of the DMCA. Or even Viacom v. YouTube, which, though settled without trial, revealed a similar fact pattern of knowingly enabling users to infringe copyrights. Or one of my favorite moments in internet hubris: Reddit’s hand-wringing, apologetic rationale for removing a subreddit that was hosting stolen nude photos of celebrities, who happened to be the victims of a hacker.

Not one of the cases alluded to above involves protected speech, yet the responses have all been variations on the same theme: that removing anything from the web can only be a slippery slope toward “censorship.” And despite the fact that these, and other examples, generally entail unprotected, illegal content, we are now suddenly expecting the OSPs to grapple with the more complicated matter of monitoring legal speech and to do…something…as a matter of principle. Don’t get me wrong. A change in attitude would be welcome in so many ways. But if the major platforms cannot first amend their practices with regard to illegal material, I am highly doubtful they will come anywhere near striking the balance that everyone who is now having the “responsibility” conversation says is so essential.

In a panel discussion about platform responsibility hosted yesterday by the Technology Policy Institute, Daphne Keller of the Center for Internet and Society said that she “did not want to return to the copyright wars” in the context of the discussion now being had. That’s her prerogative, of course, but copyright infringement is probably the vanguard issue that is most instructive to this moment of internal and external consideration of what platform responsibility actually means. Two decades’ worth of policies adopted by the major OSPs to first profit from copyright infringement and then seek to reshape copyright law itself in the courts, in academia, and in the public sphere reveal the sense of “responsibility” these companies have felt toward the people they have been exploiting. And of course, when the exploited complained, they were told they were wrong—that they did not understand the future.

In fact, in yesterday’s panel, I believe it was Keller who alluded to the “false dichotomy” that pits technology against rightholders, but let us not forget the origin of that bullshit narrative. Because it didn’t come from the rightholders. Shall we do a search for all the editorials posted by Techdirt, by EFF, by Lessig and Lefsetz—by copyright critics large and small—who have labeled creative rightholders as technology Luddites “clinging to old models”? That’s not the copyright owner’s narrative, it’s Big Tech’s narrative. So, if there is a false dichotomy, which now demands clarification, it ought to be recanted by the liars who wrote it and are still repeating it. That would be taking responsibility.

Interestingly enough, as a former Associate General Counsel for Google, Keller worked on the aforementioned Equustek case, and in June of 2017, she wrote a blog post for CIS in which she labeled the Canadian Court order that Google remove search results globally as an “ominous” proposal. In simple terms, this was a case in which a counterfeit business infringed Equustek’s trade secrets and then sold knock-off products via multiple sites on the web. Equustek sought and won a court order to remove the counterfeiter’s sites globally from Google’s search results.

I cite this example because it is comparatively straightforward. The legit company deserves the business earned by its products; consumers deserve to know what they’re buying and from whom; and there is no speech protection for trade in counterfeit goods. Equustek is also instructive because there is a clear parallel between its prayer for injunctive relief and, say, the motion picture industry’s efforts to have Google delist or demote major pirate sites, which are also not protected speech. Yet, in her 2017 post, Keller sums up the “ominous” nature of the Canadian Court order thus:

“Canada’s endorsement of cross-border content removal orders is deeply troubling. It speeds the day when we will see the same kinds of orders from countries with problematic human rights records and oppressive speech laws. And it increases any individual speaker’s vulnerability to laws and state actors elsewhere in the world. Content hosting and distribution are increasingly centralized in the hands of a few multinational companies – Google, Facebook, Apple, Amazon and Microsoft with their web hosting services, etc. Those companies have local presence and vulnerability to formal jurisdiction and real world threats of arrest or asset seizure in scores of countries.”

Apropos that first sentence, Keller asks rhetorically in the same post, “Can Russia use its anti-gay laws to make search results unavailable to Canadians?” I have two responses to this: the first is No, because the hypothetical Russian court order would violate both Canadian and American law, which is not the case with Canada’s order to Google in Equustek. Keller, who is really citing Canada’s Michael Geist, falsely alleges that the defendant in Equustek is disseminating protected “speech and information,” which is not the case because the content is infringing and misleading in a manner that could be construed as fraudulent.

My second response is to mention that the policy view Keller seems to advocate—that the rule of law just doesn’t work in cyberspace—is exactly how we arrived at the moment in history when the Russian government is in fact exporting its agenda to the U.S. by using our own speech rights against us on social media. The Geist/Keller example of the Russian court order is pure hypothetical hysteria, but the phenomenon in which paid Russian hackers are fomenting anti-gay and other hateful sentiments to ratchet up divisiveness in the U.S. is a verified reality. I happen to think this makes pretty compelling evidence that the rule of lawlessness in cyberspace hasn’t worked out so well, but perhaps that’s just my inner Luddite talking.

So, although the topic of platform responsibility may be trending right now, I maintain some doubt that the OSPs can, or even should, try to protect society against the social and political effects of problematic information. That topic may be what sparked the conversation, but the complexity of that challenge, as it is currently framed, may wind up allowing the service providers to revert to the status quo, in which they moderate almost nothing and monetize almost everything.

Instead, taking on the less-challenging task of actually mitigating illegal content—copyright infringement, harassment, counterfeiting, trafficking, libel, etc.—does not require platform administrators to wade into the murky complexities of moderating speech. So, if they really mean it when they say, “We have to do better,” they can certainly start by complying with reasonable court orders and working with—rather than against—key stakeholders seeking a more lawful internet ecosystem.


Photo by David Crockett