On September 30, the House Judiciary Committee held a hearing to discuss the Copyright Office report, published in May, commenting on the efficacy of Section 512 of the Digital Millennium Copyright Act (DMCA). Section 512 provides conditional immunity to online service providers for copyright infringements conducted by users of their services. (For a basic summary of those conditions, see this page.)
Reiterating the position that the USCO report fails to consider the interests of the general public in its analysis, Meredith Filak Rose, senior policy counsel at Public Knowledge, urged the committee to proceed with cautious awareness that in the years since 1998, the public has become profoundly dependent upon the internet for a broad range of ordinary and essential needs.
With due respect to Rose personally, and with deference to the many devil’s details implicated by her testimony, I shall, once again, take issue with the overbroad context in which digital rights groups like Public Knowledge try to frame discussion about the DMCA. For instance, at the start of her testimony Rose states that, “229 million Americans use the internet each day. That’s 229 million American adults using the internet to work, worship, connect with family and friends, receive healthcare, consume and discuss the news, and organize political action each and every day.”
Aside from the fact that the mosaic of internet uses needs to be more diverse in order to present a clear picture (let’s not forget the mindless scrolling, the clickbait, the misinformation, or the porn), the salient point is that most ordinary internet use does not require the appropriation of copyrighted works. So, framing a conversation about a section of the copyright law by alluding to the scope of everyday internet traffic is both distracting and entirely beside the point. If Congress were discussing CAFE standards, and an oil industry representative testified that 229 million American adults drive to work, church, and the grocery store every day, this would be a meaningless prelude to an argument against mandates for more fuel-efficient cars.
The Fight Over Account Termination
So, let’s stipulate the obvious: We all use the internet for myriad practical purposes all day long. And if anyone can show me the intersection between copyright infringement and a telemedicine appointment, I’ll take a look. But what Rose is really teeing up is advocacy for the status quo of DMCA §512(i) and the barely implemented requirement that ISPs eventually cancel the accounts of repeat copyright infringers. We cannot reconcile, Public Knowledge argues, a family’s fundamental need for broadband with the possibility that a teenager in the house might repeatedly infringe copyright, and the service provider will be required to terminate access for the entire household.
But the reality is not quite so binary or draconian, even if the statute has proven unclear to the point of futility. Congress’s decision in 1998 not to define “repeat infringer,” or to codify universally applicable guidance for termination policies, left the ISPs (access providers) and the edge providers (web platforms) free to maintain the practice of termination avoidance for repeat infringement by users. The concern of digital rights groups, therefore, is that somehow the service providers will have to comply with a 22-year-old condition they’ve largely evaded.
In the costly BMG v. Cox litigation, which exposed that provider’s risible fourteen-strike policy, Cox’s users received multiple warnings before not actually losing their accounts. And although copyright owners would certainly like to see more meaningful implementations of 512(i), they neither propose nor endorse a scenario in which a family wakes to find its broadband inexplicably terminated for repeat infringements of which the account holder was somehow unaware. This is not the way account termination happens now or has ever been envisioned to happen.
At the same time, although this is not the post for offering specific legislative recommendations, one policy that would alleviate some of the tension in 512(i) is site blocking, which has proven effective in foreign jurisdictions. If groups like Public Knowledge, EFF, et al. were not so adamantly opposed to blocking enterprise-scale, foreign-based piracy sites, a compromise might be more easily found that would mitigate many of the concerns these groups identify with regard to account termination scenarios.
“Red Flag” Knowledge at the Heart of the Matter
This focus on the internet writ large reinforces the major internet companies’ efforts to conflate their commercial interest with the public interest. What many call the “free flow of information,” allegedly for our benefit, often has nothing to do with information. What this erudite-sounding expression really means is that because the social sites are engineered to exploit vulnerabilities in human psychology in order to keep users addicted and active, the platform owners like to avoid legal obstacles like copyright, privacy, or antitrust matters that may create friction between user and interface.
Consequently, today’s major platforms—all founded years after the DMCA was first hammered out between big telco and big media—read certain ambiguities in the statutes to mean that they are free to profit from chronic infringement by users, while doing the bare minimum to comply with the notice-and-takedown provision. Specifically, as discussed in my post about the first Senate-led review of DMCA, rightsholders hope that Congress will more clearly define §512(c), which states that providers will not be liable for infringement if …
(1) its operators do not have actual knowledge of infringement; (2) its operators are not aware of facts or circumstances from which infringing activity is apparent; and (3) upon obtaining such knowledge, its operators expeditiously remove the relevant material.
Commonly referred to as the “red flag” knowledge section of the statute, a major point of contention for rightsholders, both in and out of court, is the extent to which service providers allege that they lack any knowledge of infringement sufficient to meet the liability standard. Even in a relatively recent case where plaintiffs presented emails revealing that site operators made affirmative decisions to leave online material they believed to be infringing, courts have misread §512(c) to mean that these operators would need legal and industry expertise to meet the “red flag” bar. This is inconsistent with the reasonable, ordinary person context in which this part of the statute was written, hence rightsholders’ hope that Congress will consider clarifying the language.
Because §512(c) is at the heart of the good-faith/shared responsibility intent of the DMCA, I have to say that I did a little spit-take when Rep. Lofgren raised the “red flag” subject and asked her first question of Meredith Rose, who replied that she is “not terribly familiar” with that part of the statute. This is not intended as a personal gotcha, but it is a rather serious matter when an organization purporting to represent the interests of “everyone who uses the internet” is unprepared to discuss one of the most problematic sections of the DMCA. In fact, the much broader question of what platform operators can know about the material on their servers, and what they should do about some of it, is the vexing challenge of the moment with regard to the effect social media are having on society. The knowledge question goes way beyond copyright.
The Dog-Eared Speech Argument
Historically, the internet industry’s shell game on the subject of what can and cannot be known is consistent with the kind of site management that has now proven to be a major catalyst in the dissolution of democracies worldwide. The same companies whose algorithms are allegedly so sophisticated that they can predict our choices before we make them paradoxically claim an inability to parse data that ordinary, non-prescient humans can interpret. The manner in which the industry has exploited vagueness in the knowledge standard in the DMCA runs parallel to its history of shrugging “neutrality” when it comes to the moderation of harmful material like organized hate speech, conspiracy groups, and dangerous misinformation—a “neutrality” no longer acceptable to much of the public.
I cannot fathom how any reasonable person looks across the landscape at the ragged state of American democracy and, with a straight face, continues to exalt Web 2.0’s grand experiment in free speech as though it were not an appalling failure. The evidence is now clear, including testimony from a steady stream of defectors from the social media companies, that Facebook, Google, Twitter, Reddit, et al purposely designed their platforms to be digital crack. And it is no surprise that divisive politics and conspiracy garbage are potent ingredients in the drug cocktail that captures and retains the attention of millions.
Referring back to the 229 million users, it isn’t connecting to family or online banking or worshipping that is systematically destroying the American Republic; it’s the speech-a-palooza that organizations like Public Knowledge earnestly champion that has sewn a motley patchwork of customized realities to the extent that we are now clinging to what remains of political common ground with our fingernails. Social media is a toxin coursing through the veins of the body politic with such deleterious effect that the most sober historians and political operatives are sincerely wondering if the Republic can survive another decade. It ain’t copyright enforcement that sends QAnon wackos to Congress.
Yet, to the tech-utopian, any effort to allow copyright owners to better protect their works online will unavoidably, and unacceptably, silence someone’s speech somewhere. In fairness, this is true. It is inevitable at times and must be remedied on a case-by-case basis. Further, I see no reason why intentional abuse of the DMCA to silence speech (e.g., criticism) cannot be more strongly proscribed through statutory reform if need be.
But citing “speech” as a generalized framework for debate is too broad and has little to show for itself as a social benefit to date. Aside from the fact that speech is silenced every minute online through many modes (e.g., bullying or platform moderation), there is no way anyone can measure how much speech is currently silenced, or how much more or less would be silenced by improving the DMCA for rightsholders. It’s counting grains of sand in the desert.
Ironically enough, Twitter announced over the weekend that it would delete tweets by anyone hoping the president dies from COVID-19. And while there are several reasons why this is sound policy for Twitter, it happens to be one of the few occasions when a platform would censor a prime example of protected speech. And, as one commenter rightly pointed out, Twitter has left intact volumes of missives hoping for the sexual assaults and deaths of women who speak out on various issues, including actual threats that transgress any claim to the speech right. So, we should dial down the speech rhetoric until it describes what the world actually looks like, not that Barlowian “home of mind” that never existed.
To close out this long and repetitive debate, I think a fair market summary of the DMCA’s status quo is as follows: The major copyright owners enforce their rights through the use of some technological measures and the notice-and-takedown system, albeit through a ceaseless, dynamic, and expensive process that has little effect on the volume and rate of infringement. The small rightsholders barely enforce their rights at all through notice-and-takedown and generally give up trying. The user-generated platforms continue to profit substantially from third-party infringements against both small and large creators. And the 229 million of us Americans using the web generate billions of transactions every day that have nothing to do with copyright.
© 2020 – 2021, David Newhoff. All rights reserved.