YouTube’s Tactics Re. Article 13 Are the Real Concern

When a media conglomerate is the subject of a news story, we expect the news organization owned by the parent company to acknowledge that relationship in its reporting.  So, when ABC News reports a story, positive or negative, about the Disney Corporation, it is standard practice that the reporter remind viewers that she is talking about her ultimate employer.  Unfortunately, the paradigm is very different when it comes to new media companies like YouTube, which can leverage the global reach of its platform (fueled by the capabilities of Google) to evangelize any message that serves its policy interests. 

In a new guest post on The Trichordist, Volker Rieck lays out the manner in which YouTube uses the power of the platform to influence public debate (i.e. scare the bejesus out of people) when seeking a policy outcome favorable to the company.  After CEO Susan Wojcicki addressed the community of YouTube creators in a blog post and video warning them that Article 13 of the EU Digital Single Market Directive threatens their livelihoods, she got the response she was looking for.  As Rieck describes…

“Wild claims circulated that YouTube channel operators would already see their livelihoods threatened in 2019, that Article 13 was a censorship law, and so on. The platform helped the videos made in response to its own appeal to become highly visible and to reach wide audiences by displaying them on user home pages and by categorizing them as “trending.” Three of the top 5 videos in the YouTube trending charts at the beginning of November transported these dystopian visions.”

So, apropos my intro, even if the claims and assumptions made about Article 13 were accurate—and they are not—it should be more than a little frightening that a corporation with the scope of influence of YouTube can so effectively shape reality in regard to any matter of public policy.  In a recent post responding to the EFF’s lopsided approach to Article 13, Neil Turkewitz summarizes the current draft of the directive in the following sober terms:

“… it requires large commercial platforms who are in the business of content distribution (defined in the legislation) to license the works that they are distributing, and to take steps to guard against the distribution of works for which it is not licensed. While the use of filters is not explicitly mentioned (unlike an earlier version of the Article), it is anticipated by most parties that most covered platforms would discharge their obligations to prevent distribution of infringing materials through the use of available technologies—either bespoke like ContentID, or off the shelf from a supplier like AudibleMagic.”

It is also important to keep in mind that, while it is timely for all creators (including YouTubers) to become better informed about Article 13 and to weigh in on the merits of the proposals, it will take at least a couple of years for all of the member states to implement the directive.  Thus, YouTube’s efforts to panic its entrepreneurial creators this month should be reason enough to question both its methods and its motives.  Is it really about those creators, or is it about a $160-billion company not wanting to pay license fees to other creators?

On the one hand, this type of scare-mongering is business as usual.  A corporation or industry doesn’t want the responsibility or cost of complying with a proposed law, and so tells consumers or employees (or both) that they will suffer if the policy in question were to be implemented.  But on the other hand, when a media platform like YouTube claims that a new policy will have “unintended consequences” like shutting down various channels, the company is uniquely empowered to spread its self-serving message and to manipulate user experiences in order to prioritize that message over other narratives.  As Rieck puts it …

“Ultimately, the way YouTube channels have been pressed into the service of the platform demonstrates just how urgent the need for measured political regulation of the platform has now become and how easy it is for the platform to exploit the ecosystem of private and semi-professional pseudo-journalism it hosts for its own ends.”

I would go so far as to at least entertain the possibility that YouTube could shut down or severely limit various channels as a false-flag tactic aimed at sowing further resentment against proposals like Article 13.  Perhaps the company would never engage in such an underhanded scheme, but really, what’s to stop them?  After all, they are already willing to engage in bad-faith PR designed to mislead YouTubers about the true nature of the EU directive.  In her open letter to YouTubers, CEO Susan Wojcicki states:

“Article 13 as written threatens to shut down the ability of millions of people — from creators like you to everyday users — to upload content to platforms like YouTube. And it threatens to block users in the EU from viewing content that is already live on the channels of creators everywhere. This includes YouTube’s incredible video library of educational content, such as language classes, physics tutorials and other how-to’s.”

Really?  Even if we set aside the fact that Article 13 is a proposal to develop protocols that will take time and further negotiations to implement (if they happen at all), this statement implies that a very high percentage of YouTube channels rely substantially on unlicensed copyrighted material.  If that’s the case, why should that status quo be preserved?  I’ve seen a lot of funny, informative, creative videos produced for YouTube that do not make any use of other creators’ protected works.

For the YouTube creators who do use some portion of protected works, Wojcicki raises a subtle but important dichotomy when she addresses them as “a diverse community of creators who are building the next generation of media companies.”  Because that sounds to any reasonable person like a business enterprise.  And if these YouTubers are indeed engaged in business, then why shouldn’t they have the same responsibilities as every other type of professional creator to work within boundaries that respect copyrights?

It seems that when it suits the platform’s interests, we are meant to think of YouTubers as either hapless children (remember Lawrence Lessig?), who cannot be expected to know about copyright; or we are meant to think of them as the vanguard generation of new creative professionals, who should not be burdened by copyright.  Notice how, in either case, YouTube seeks to avoid its responsibility—as the only multi-billion-dollar media company in this narrative—by aligning its interests with the interchangeable interests of its users.

I recognize that underlying YouTube’s ability to frighten this class of creators about Article 13 is a litany of mistakes and abuses of existing models like Content ID or the DMCA notice and takedown process.  YouTube creators have had their own works targeted, either through error or willful misuse of these systems; and bad actors have targeted works they do not legally represent. 

While the anecdotes of bad-faith use of these systems are true, they feed a broader narrative which is not true:  that abuse of content-filtering systems is so rampant that the status quo is preferable to any attempt to make these systems work better for all stakeholders.  The status quo may be working for YouTube’s bottom line, but it certainly is not working for rights holders whose works are infringed at uncontrollable volume on the platform.   In fact, I have yet to see any data that even indicates that filtering or DMCA abuse is anywhere near the scope of infringement.  

Meanwhile, assuming Article 13 becomes law in the EU, YouTube creators have at least a couple of years to assess the extent to which their channels truly rely on the protected works of other authors.  Those who do not use other people’s works should be entirely unaffected; and if they are, their complaint may be properly directed at YouTube rather than Article 13.  Creators who use protected works legally—either by license or fair use—should play a particularly active (but informed) role in these developments.  

I suspect that, as professional creators, YouTubers will increasingly share common cause with other types of creators.  In fact, YouTube’s July launch of its Copyright Match system to address creator-to-creator disputes certainly suggests that YouTubers care about their own copyrights and should, therefore, take a proactive rather than a reactive look at the goals of Article 13.  After all, with regard to the way Wojcicki’s letter spawned a lot of misinformed outrage, it’s worth noting that just because this class of creators uses YouTube is no reason to let YouTube use them.

Implications of YouTube’s Copyright Match System

Last month, the European Union voted against key copyright enforcement provisions as part of its Digital Single Market initiative. Specifically, the proposal known as Article 13 called for the 28 member states to work with multiple stakeholders to develop and implement filtering technology that would, in theory, prevent unlicensed, copyrighted works from being uploaded onto user-content-supported platforms.

Article 13 was labeled by its opponents as a call for “censorship machines,” and as usual, the refrain was shouted from the rooftops that adopting any such filtering would lead to the end of free speech online and destroy “all that is good and pure” about the internet. To be sure, the tone and methodology of the campaign against these provisions reeked of Silicon Valley money and tactics, but whether you believe that or not, one talking point among critics stands out, which inspired this post: that the EU’s call for filtering would harm new creators.

Granted, “creator” in the rhetoric of groups like EFF includes literally anyone who posts anything online; but if we limit our data to that paragon of new creators—the successful YouTuber—YouTube itself made an interesting announcement almost concurrent with the defeat of Article 13. The platform launched its Copyright Match tool to protect YouTubers against unauthorized re-posting (freebooting) by other YouTubers.

Traditional rights holders have earned this moment of schadenfreude after being lectured for years to get on the future bus, quit whining about their copyrights, and follow the example of “new creators” working in “new models” that “bypass gatekeepers” and obviate the need for copyrights. Of course, it was inevitable that as YouTubers became entrepreneurs, they would feel entitled to the revenue from their labor (as they should) and that YouTube would have a vested interest in protecting the copyrights of its profitable video-makers—at least from other video-makers.

Using technical measures one might call “filters” (or dare we say “censorship machines”?), the new Copyright Match system works by identifying the first upload of a new video and associating that file with the presumptive owner of the work. Then, if and when matching videos are uploaded to YouTube, the original creator is notified and given the option to do nothing, to ask YouTube to remove the Match, or to get in touch with the uploader of the Match.
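For readers who want the workflow made concrete, the first-upload-wins logic described above can be sketched in a few lines of Python. This is purely an illustration of the described flow: every name and the fingerprinting function are invented here, and YouTube's actual implementation is not public.

```python
# Hypothetical sketch of a first-upload-wins matching flow, as described
# in the post. All names and the fingerprint function are invented for
# illustration; YouTube's real system is not public.
import hashlib

registry = {}  # fingerprint -> channel of the first (presumptive) owner


def fingerprint(video_bytes: bytes) -> str:
    # Stand-in for a perceptual/content fingerprint. A real matching
    # system would not use a cryptographic hash, which misses
    # near-duplicates like re-encodes or cropped copies.
    return hashlib.sha256(video_bytes).hexdigest()


def handle_upload(channel: str, video_bytes: bytes) -> str:
    fp = fingerprint(video_bytes)
    owner = registry.get(fp)
    if owner is None:
        registry[fp] = channel  # first upload: presume ownership
        return "published"
    if owner == channel:
        return "published"  # the owner re-uploading their own work
    # Match found: the original creator is notified and may do nothing,
    # ask for removal of the Match, or contact the re-uploader.
    return (f"match: notify {owner} "
            f"(options: ignore / request removal / contact {channel})")
```

Even in this toy form, the design choice is visible: "ownership" is inferred entirely from upload order, which is exactly why the presumptive owner is only presumptive.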

Hypocrisy Much?

If this sounds to the experienced observer like an intramural version of a take down/stay down provision achieved through technical measures, that’s because it is. And experienced observers probably remember that all previous proposals for take down/stay down, whether statutory or technological, have been labeled by industry-funded “activists” as internet-killing initiatives. In fact, during the May 2016 hearings about the DMCA, one of the dumb-but-effective talking points was that any mandate for such technical measures would “entrench” the market dominance of YouTube. (Yes, laughing through tears is the right response here.)

Traditional rights holders who have spent hours of their lives trying to identify and stop unlicensed uses of their works on market-dominating YouTube will quickly recognize the duplicity in launching Copyright Match. “Why should only their ‘chosen’ get access?” asks Grammy-winning composer Maria Schneider, one of many artists who will attest to the opaque and labyrinthine Content ID system rights holders theoretically use to track and control use of their works on YouTube.

What is not generally understood is that even getting access to Content ID varies wildly depending on a rights holder’s relative presence on YouTube and his/her interest in monetizing unauthorized uses vs. taking down unauthorized uses. Guess which one YouTube favors. Again, Schneider explains …

“YouTube always says that independents like me, to whom they’ve denied ContentID, can get access to the same tool via a third party. But what they don’t admit publicly is that this is only possible if we’re willing to monetize at least some of our work. So, independents like me, who want no part of monetization and simply want to block illegal uses of our works are just out of luck. And I might add that this technology we’re wanting to access has actually been around for twenty years—longer than YouTube has existed!”

I spoke to an independent artist, who prefers to remain anonymous for fear of retaliation by YouTube’s Content ID group.  He does have Content ID and acknowledges that he’s probably a “thorn” in the side of YouTube’s copyright team because he actively employs the system only to stop unlicensed uses of his compositions and sound recordings. And lest anyone think he’s responding to “new creators” making possible fair uses of his music, his most important Content ID-related takedowns have been aimed at global brands and Fortune 500 companies using his music for marketing purposes without a license.

It’s worth noting that the fact that this artist chooses to remain anonymous due to concern that YouTube would delete his Content ID account out of spite speaks volumes against every claim of tech-utopian bullshit Silicon Valley and its network of EFFing dissemblers have been slinging for years. As David Lowery explained in 2016, YouTube is a monopsony, a market with a single buyer, which means they get to make, break, and change the rules as often as they like, and the “sellers” can just eat it.

In this regard, it will be interesting to see if Copyright Match leads to YouTuber-to-YouTuber disputes and how the company will handle them, if it does. For instance, it is not clear at this point that YouTubers whose uploads are wrongly identified as “Matches” will have any kind of counter-notice remedy available to them.

Although the company’s video explaining the new system urges YouTubers requesting Match takedowns to “consider fair use,” it will be truly fascinating to see whether YouTube gives a damn about fair use among its own microcosm of creators. For sure, general users of the platforms have never been effectively dissuaded from uploading a wide range of files that could never qualify as fair uses.

None of this should be taken as a dig against YouTubers. To the contrary, I think many of them are brilliant artists and deserve to protect their interests and rights as much as any other creator. But this apparent initiative to protect their interests points to another aspect of YouTube’s ever-changing relationship to copyright enforcement and its relevance to the fight over Article 13.

Don’t Let the Internet Become YouTube?

Not that long ago, YouTube was consistently cited as the apotheosis of the utopian belief that the web will empower creators without gatekeepers—and without copyrights. But where this Copyright Match announcement becomes intertwined with the campaign against Article 13 is that some pundits against the proposal lately cite YouTube as a cautionary tale—asserting that the platform’s often-inconsistent application of copyright protection policies and technical measures is exemplary of what should not be done internet-wide pursuant to Article 13. The claim appears to be that because YouTube’s Content ID system has allegedly fostered rampant false strikes, resulting in unfair channel deletions, this generalized stifling is what the “entire internet would look like” if the EU moved forward with the kind of filtering proposed.

While there is certainly anecdotal evidence—some of it compelling—of Content ID error and abuse leading to improper strikes on YouTube, I have yet to see any evidence to support the claim that this problem is both rampant and increasing across the platform. As is often the case, activist groups or observers who have no skin in the game tend to exaggerate anecdotal evidence into statistical assumptions. Or as our anonymous artist puts it, “In 100% of the anti-Content ID statements I’ve ever heard over the years, 100% of the complainers had 0% vested interest in the system: they’re either Google-funded anti-copyright groups or individuals on some kind of personal crusade.”

In this creator’s direct experience with false identifications, he notes that “With about 100,000 Content ID claims in my dashboard since late 2012, I can say that YouTube has delivered me a mistaken ID about 10 times. The anti-copyright crowd will take that as evidence that the system needs to be dismantled or destroyed. I just don’t get it. The perfect shouldn’t be the enemy of the good.” And that’s the experience of a creator who can avail himself of Content ID—it says nothing of the creators identified by Maria Schneider, who have no access to any remedy via Content ID at all.

Competing Narratives

So, in context to the proposal that Article 13 filtering would “stifle new creators,” we have at least three narratives that compete and crisscross in ways that can be hard to track, if you’re not directly engaged with these systems. First, because Copyright Match is a response to YouTuber complaints about freebooting, it reveals that “new creators” don’t like copyright infringement when it happens to them (ergo copyright is not obsolete). Second, Copyright Match implies that filtering technology of this nature can be implemented without destroying a whole platform or stifling new creators. And third, Copyright Match is at least indicative of technology that could help non-YouTuber creators enforce their rights, but it will not be made available to them because it isn’t in YouTube’s interest to do so.

One thing the introduction of Copyright Match illustrates for sure is that creators are creators—whether traditional or new, they feel a sense of ownership in the products of their labor. And from this premise comes the foundation of copyright and systems for protection that will begin to make “new models” look a little more like “old models.” It’s what happens every time a business discovers it is codependent with talented people.

YouTube Bans Gun Videos. Raises Difficult Questions.

While Austin, TX was still searching for its serial bomber, various guests on CNN were of course speculating about the assailant’s level of expertise (perhaps even formal training) due to the technical sophistication of some of the explosive devices. Cynically, I thought, “Or he has YouTube.”

For years, the internet industry, led by the major platforms, has invoked free speech as a rationale for taking a hands-off approach to the content on their sites. This has included content that is intrinsically illegal, like copyright-infringing material, and content that fosters criminal activity, like demonstrations of computer hacking, terrorist propaganda, or drug trafficking.

Beginning with the advertisers drawing a line in the sand in early 2016, economic pressure to clean up the platforms collided a year later with political pressure as both citizens and lawmakers finally decided that the major sites can be held at least somewhat responsible for the user-generated content (UGC) they host. And it turns out that under all this pressure, the companies seem to be discovering capabilities they previously claimed were untenable—though it remains to be seen whether they will make decisions that are both coherent and socially beneficial.

This past week, YouTube announced that it would remove and/or bar certain firearm-related videos, which will, no doubt, be welcome news for many Americans as the groundswell demanding better gun control gains momentum due largely to the energy of the Parkland students. But my friend and colleague Devlin Hartline at the Center for the Protection of Intellectual Property was quick to notice a hypocrisy in YouTube’s decision—namely, that the company would be banning at least some videos that are neither illegal nor promoting illegal activity, implicating two constitutional rights at the same time, as he observed on Twitter.

Writing as someone who is hostile toward Second Amendment maximalism, I cannot dispute Hartline’s observation, especially as someone who likewise opposes sites still monetizing outright illegal and highly toxic content under a blanket claim of platform neutrality. But I want to look past the complex and heated gun issue—and even the copyright issue—to consider how this recent decision by YouTube highlights the mercurial nature of the major sites, and how this frustrates efforts toward a coherent cyber policy.

The Chameleon Sites

The major UGC platforms—e.g. Google, YouTube, Facebook, Twitter, Reddit—are chameleons with at least three bold colors, “Community,” “Commons,” and “Brand”; and one diluted color, “Corporation.” For the sake of simplicity, I’ll define these terms as follows:

Community

As private entities comparable to physical retail environments, these sites are entitled to foster any environment they want—gun-free or panda-free, if they choose—and users will either join and access the platform or they won’t. In this market-based context, the website is serving as a “Community” and owes no fealty to free speech or most other constitutional rights.

Commons

Although none of these sites is a public entity, there is arguably an extent to which a limited number of platforms have become the primary public fora for communication, news, exchange of ideas and information, and commerce. As a result, the notion that these sites represent a “Commons” has long been supported by user sentiment. But this sensibility has allowed the platform owners to appeal to the First Amendment as a rationale for taking almost no responsibility for managing content, including some of the paid advertising.

Brand

Presumably, YouTube’s recent gun-video ban is an example of a PR decision in which the company has decided (correctly or not) that being on the wrong side of the current trend is bad for the “Brand.” This type of decision corresponds to the “Community” model but is anathema to the “Commons” model.

Corporation

I refer to this as a diluted color because too often, we have a habit of ignoring it, of talking about “our internet” as though that notion is coextensive to the business interests of Google et al. Meanwhile, many of the invisible decisions—like Facebook authorizing a developer who ends up abusing our data—are profit-based, “Corporation” decisions that are presented to users as supporting either the “Community” or the “Commons.” (Think “fun” personality quiz you share with friends but which is actually mining your data in order to manipulate your political views.)

Rethinking Liability & Responsibility

The two statutory liability protections written in the late 1990s—Section 230 of the CDA, and Section 512 of the DMCA—were not intended to foster a “Commons” model per se. They were simply meant to shield as-yet undefined platforms from liability for the unlawful conduct of as-yet undefined types of users. But because certain major platforms today have qualities akin to a “Commons,” those liability protections are easily conflated with the speech rights of users, which galvanizes the liability shields in a manner that allows these sites to have it both ways—to monetize everything as private entities while appealing to the illusion that they are public entities.

Further exacerbating the semantic confusion, “the internet” is usually described as a single entity from a policy perspective. This has allowed the biggest, wealthiest, and most technologically capable providers to entrench a non-liability paradigm by rhetorically citing the interests of independent, small, and start-up enterprises. This is an unreasonable analysis when the total number of major platforms—the ones that could arguably be considered a “Commons”—is just a handful of entities in contrast to the roughly one billion sites online.

Historically, those of us in the copyright fight have watched the big platforms color-shift between the rhetoric of “Community” and the rhetoric of “Commons,” depending on which identity best serves their interests in the moment. So, if new federally-mandated guidelines are called for in the wake of all the Facebook fallout—and this seems quite possible—perhaps one starting point is to address the “Community/Commons” dichotomy. Because it seems to me that new policies need not address the entire internet, when perhaps fewer than a dozen sites can reasonably be described as semi-public.

The Challenge of Semi-Public Spaces

Hartline is a law and policy expert who knows full well that YouTube doesn’t actually have to answer to the First or Second Amendment, but his response to the new gun-video ban is reflective of how most users tend to feel if otherwise legal material they post online is removed. If I post a link to this blog on Facebook, and it’s removed, I can’t sue the company for speech infringement, but that doesn’t mean my speech would not, in effect, be suppressed—precisely because Facebook owns so much of the market of potential readers.

At the same time, I would not necessarily know if the removal was done to serve “Community,” “Brand,” or “Corporation,” but for sure, the removal would belie any pretense that the platform is a “Commons.” Conversely, the more a platform like Facebook chips away at the illusion that it is a “Commons” (i.e. controls more content), the more people are likely to abandon the site in a massive self-fulfilling prophecy, taking “Community,” “Brand,” and “Corporation” down into the oubliette with MySpace.

We have to acknowledge that there truly is no historic precedent for so profound a merger of public and private interests as these particular platforms. There is some case law precedent in the analog world in which private property functions as public space and, thus, courts have limited an owner’s ability to prohibit speech. The shopping mall naturally comes to mind, but neither a mall nor any other semi-public, physical space was designed with the purpose of hosting expression, let alone the expression of millions of people in the same venue.

Addressing the “Commons/Community” dichotomy could help contextualize the intent of the existing statutory framework on site liability, which over-broadly lumps “the web” into one big policy bucket. Gaps in the legislative language have too-often allowed the major sites with the most public influence to behave as chameleons in both the courts and the court of public opinion. Thus, the major platforms have a history of rejecting and criticizing even statutorily-mandated systems to mitigate abuse—all in the name of protecting user speech, which they are not in fact obligated to protect.

So, regardless of individual views about guns and gun control, Hartline is correct to observe that this latest decision by YouTube reveals that our social and legal relationship to the major platforms remains bipolar at best. Hence, before any reasonable policy changes can emerge, it seems like the next step is to define the terms that more-accurately describe the web we have instead of the web we expected some twenty years ago.


ADDENDUM:  At the moment of publication, a colleague sent a link to this article by Eriq Gardner at Hollywood Reporter.  Court holds that YouTube is not a public forum, which it isn’t.  But that doesn’t wholly satisfy the challenge.