YouTube Bans Gun Videos. Raises Difficult Questions.

While Austin, TX, was still searching for its serial bomber, various guests on CNN were, of course, speculating about the assailant's level of expertise (perhaps even formal training), given the technical sophistication of some of the explosive devices. Cynically, I thought, "Or he has YouTube."

For years, the internet industry, led by the major platforms, has invoked free speech as a rationale for taking a hands-off approach to the content on their sites. This has included content that is intrinsically illegal, like copyright-infringing material, as well as content that fosters criminal activity, like computer-hacking demos, terrorist propaganda, or drug trafficking.

Beginning with the advertisers drawing a line in the sand in early 2016, economic pressure to clean up the platforms collided a year later with political pressure, as both citizens and lawmakers finally decided that the major sites could be held at least somewhat responsible for the user-generated content (UGC) they host. And it turns out that under all this pressure, the companies seem to be discovering capabilities they previously claimed were infeasible, though it remains to be seen whether they will make decisions that are both coherent and socially beneficial.

This past week, YouTube announced that it would remove or bar certain firearm-related videos, which will no doubt be welcome news for many Americans as the groundswell demanding better gun control gains momentum, due largely to the energy of the Parkland students. But my friend and colleague Devlin Hartline, at the Center for the Protection of Intellectual Property, was quick to notice a hypocrisy in YouTube's decision: namely, that the company would be banning at least some videos that are neither illegal nor promoting illegal activity, thereby implicating two constitutional rights at the same time.

Writing as someone who is hostile toward Second Amendment maximalism, I cannot dispute Hartline's observation, especially since, like him, I oppose sites monetizing outright illegal and highly toxic content under a blanket claim of platform neutrality. But I want to look past the complex and heated gun issue, and even the copyright issue, to consider how this recent decision by YouTube highlights the mercurial nature of the major sites, and how this frustrates efforts toward a coherent cyber policy.

The Chameleon Sites

The major UGC platforms (e.g., Google, YouTube, Facebook, Twitter, Reddit) are chameleons with at least three bold colors ("Community," "Commons," and "Brand") and one diluted color ("Corporation"). For the sake of simplicity, I'll define these terms as follows:

Community

As private entities comparable to physical retail environments, these sites are entitled to foster any environment they want (gun-free or panda-free, if they choose), and users will either join the platform or they won't. In this market-based context, the website serves as a "Community" and owes no fealty to free speech or most other constitutional rights.

Commons

Although none of these sites is a public entity, there is arguably an extent to which a limited number of platforms have become the primary public fora for communication, news, the exchange of ideas and information, and commerce. As a result, the notion that these sites represent a "Commons" has long been supported by user sentiment. But this sensibility has allowed the platform owners to appeal to the First Amendment as a rationale for taking almost no responsibility for managing content, including some of the paid advertising.

Brand

Presumably, YouTube’s recent gun-video ban is an example of a PR decision in which the company has decided (correctly or not) that being on the wrong side of the current trend is bad for the “Brand.” This type of decision corresponds to the “Community” model but is anathema to the “Commons” model.

Corporation

I refer to this as a diluted color because we have a habit of ignoring it, of talking about "our internet" as though that notion were coextensive with the business interests of Google et al. Meanwhile, many of the invisible decisions (like Facebook authorizing a developer who ends up abusing our data) are profit-based "Corporation" decisions presented to users as supporting either the "Community" or the "Commons." (Think of the "fun" personality quiz you share with friends that is actually mining your data in order to manipulate your political views.)

Rethinking Liability & Responsibility

The two statutory liability protections written in the late 1990s, Section 230 of the CDA and Section 512 of the DMCA, were not intended to foster a "Commons" model per se. They were simply meant to shield as-yet undefined platforms from liability for the unlawful conduct of as-yet undefined types of users. But because certain major platforms today have qualities akin to a "Commons," those liability protections are easily conflated with the speech rights of users, which reinforces the liability shields in a manner that allows these sites to have it both ways: to monetize everything as private entities while appealing to the illusion that they are public entities.

Further exacerbating the semantic confusion, "the internet" is usually described as a single entity from a policy perspective. This has allowed the biggest, wealthiest, and most technologically capable providers to entrench a non-liability paradigm by rhetorically citing the interests of independent, small, and start-up enterprises. This framing is unreasonable when the total number of major platforms, the ones that could arguably be considered a "Commons," amounts to a mere handful of entities in contrast to the roughly one billion sites online.

Historically, those of us in the copyright fight have watched the big platforms color-shift between the rhetoric of "Community" and the rhetoric of "Commons," depending on which identity best serves their interests in the moment. So, if new federally mandated guidelines are called for in the wake of all the Facebook fallout (and this seems quite possible), perhaps one starting point is to address the "Community/Commons" dichotomy, because new policies need not address the entire internet when perhaps fewer than a dozen sites can reasonably be described as semi-public.

The Challenge of Semi-Public Spaces

Hartline is a law and policy expert who knows full well that YouTube doesn't actually have to answer to the First or Second Amendment, but his response to the new gun-video ban reflects how most users tend to feel when otherwise legal material they post online is removed. If I post a link to this blog on Facebook and it's removed, I can't sue the company for infringing my speech, but that doesn't mean my speech isn't infringed in a practical sense, precisely because Facebook owns so much of the market of potential readers.

At the same time, I would not necessarily know whether the removal was done to serve "Community," "Brand," or "Corporation," but the removal would certainly belie any pretense that the platform is a "Commons." Conversely, the more a platform like Facebook chips away at the illusion that it is a "Commons" (i.e., by controlling more content), the more people are likely to abandon the site in a massive self-fulfilling prophecy, taking "Community," "Brand," and "Corporation" down into the oubliette with MySpace.

We have to acknowledge that there truly is no historical precedent for so profound a merger of public and private interests as these platforms represent. There is some case-law precedent in the analog world in which private property functions as public space and courts have accordingly limited an owner's ability to prohibit speech. The shopping mall naturally comes to mind, but neither a mall nor any other semi-public physical space was designed with the purpose of hosting expression, let alone the expression of millions of people in the same venue.

Addressing the "Commons/Community" dichotomy could help contextualize the intent of the existing statutory framework on site liability, which over-broadly lumps "the web" into one big policy bucket. Gaps in the legislative language have too often allowed the major sites with the most public influence to behave as chameleons in both the courts and the court of public opinion. Thus, the major platforms have a history of rejecting and criticizing even statutorily mandated systems to mitigate abuse, all in the name of protecting user speech, which they are not in fact obligated to protect.

So, regardless of individual views about guns and gun control, Hartline is correct to observe that this latest decision by YouTube reveals that our social and legal relationship to the major platforms remains bipolar at best. Hence, before any reasonable policy changes can emerge, it seems the next step is to define terms that more accurately describe the web we have instead of the web we expected some twenty years ago.


ADDENDUM: At the moment of publication, a colleague sent a link to this article by Eriq Gardner at The Hollywood Reporter. The court holds that YouTube is not a public forum, which it isn't. But that doesn't wholly satisfy the challenge.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
