Implications of YouTube’s Copyright Match System

Last month, the European Union voted against key copyright enforcement provisions as part of its Digital Single Market initiative. Specifically, the proposal known as Article 13 called for the 28 member states to work with multiple stakeholders to develop and implement filtering technology that would, in theory, prevent unlicensed, copyrighted works from being uploaded onto user-content-supported platforms.

Article 13 was labeled by its opponents as a call for “censorship machines,” and as usual, the refrain was shouted from the rooftops that adopting any such filtering would lead to the end of free speech online and destroy “all that is good and pure” about the internet. To be sure, the tone and methodology of the campaign against these provisions reeked of Silicon Valley money and tactics, but whether you believe that or not, one talking point among critics stands out, which inspired this post: that the EU’s call for filtering would harm new creators.

Granted, “creator” in the rhetoric of groups like EFF includes literally anyone who posts anything online; but if we limit our data to that paragon of new creators—the successful YouTuber—YouTube itself made an interesting announcement almost concurrent with the defeat of Article 13. The platform launched its Copyright Match tool to protect YouTubers against unauthorized re-posting (freebooting) by other YouTubers.

Traditional rights holders have earned this moment of schadenfreude after being lectured to for years to get on the future bus and quit whining about their copyrights. They should follow the example of “new creators” working in “new models” that “bypass gatekeepers” and obviate the need for copyrights. Of course, it was inevitable that as YouTubers became entrepreneurs, they would feel entitled to the revenue from their labor (as they should) and that YouTube would have a vested interest in protecting the copyrights of its profitable video-makers—at least from other video-makers.

Using technical measures one might call “filters” (or dare we say “censorship machines”?), the new Copyright Match system works by identifying the first upload of a new video and associating that file with the presumptive owner of the work. Then, if and when matching videos are uploaded to YouTube, the original creator is notified and given the option to do nothing, to ask YouTube to remove the Match, or to get in touch with the uploader of the Match.
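The first-upload-wins workflow described above can be sketched in a few lines of code. This is purely an illustrative sketch under my own assumptions, not YouTube's actual implementation: the class and method names are invented, and a real system would use perceptual fingerprinting so that near-duplicate files still match, where this toy uses an exact hash.

```python
import hashlib

class CopyrightMatchRegistry:
    """Toy model of a first-upload-wins match system (hypothetical names;
    not YouTube's real design). The earliest uploader of a given video
    fingerprint is treated as the presumptive owner of the work."""

    def __init__(self):
        self._owners = {}  # fingerprint -> channel of the first uploader

    @staticmethod
    def fingerprint(video_bytes: bytes) -> str:
        # Stand-in for a real content fingerprint; an exact hash only
        # catches byte-identical re-uploads, unlike production matching.
        return hashlib.sha256(video_bytes).hexdigest()

    def register_upload(self, channel: str, video_bytes: bytes) -> dict:
        fp = self.fingerprint(video_bytes)
        owner = self._owners.get(fp)
        if owner is None:
            # First upload: associate the file with its presumptive owner.
            self._owners[fp] = channel
            return {"status": "original", "owner": channel}
        if owner == channel:
            return {"status": "re-upload by owner", "owner": owner}
        # A later, matching upload by someone else: the original creator
        # is notified and may do nothing, request removal, or make contact.
        return {"status": "match", "owner": owner, "uploader": channel}

registry = CopyrightMatchRegistry()
first = registry.register_upload("OriginalChannel", b"example video bytes")
second = registry.register_upload("Freebooter", b"example video bytes")
```

The key design point the sketch captures is that "ownership" here is merely presumptive, inferred from upload order, which is exactly why wrongly identified Matches and the absence of a counter-notice remedy matter.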

Hypocrisy Much?

If this sounds to the experienced observer like an intramural version of a take down/stay down provision achieved through technical measures, that’s because it is. And experienced observers probably remember that all previous proposals for take down/stay down, whether statutory or technological, have been labeled by industry-funded “activists” as internet-killing initiatives. In fact, during the May 2016 hearings about the DMCA, one of the dumb-but-effective talking points was that any mandate for such technical measures would “entrench” the market dominance of YouTube. (Yes, laughing through tears is the right response here.)

Traditional rights holders who have spent hours of their lives trying to identify and stop unlicensed uses of their works on market-dominating YouTube will quickly recognize the duplicity in launching Copyright Match. “Why should only their ‘chosen’ get access?” asks Grammy-winning composer Maria Schneider, one of many artists who will attest to the opaque and labyrinthine Content ID system rights holders theoretically use to track and control use of their works on YouTube.

What is not generally understood is that even getting access to Content ID varies wildly depending on a rights holder’s relative presence on YouTube and his/her interest in monetizing unauthorized uses vs. taking down unauthorized uses. Guess which one YouTube favors. Again, Schneider explains …

“YouTube always says that independents like me, to whom they’ve denied Content ID, can get access to the same tool via a third party. But what they don’t admit publicly is that this is only possible if we’re willing to monetize at least some of our work. So, independents like me, who want no part of monetization and simply want to block illegal uses of our works, are just out of luck. And I might add that this technology we’re wanting to access has actually been around for twenty years—longer than YouTube has existed!”

I spoke to an independent artist, who prefers to remain anonymous for fear of retaliation by YouTube’s Content ID group. He does have Content ID and acknowledges that he’s probably a “thorn” in the side of YouTube’s Copyright Department team because he actively employs the system only to stop unlicensed uses of his compositions and sound recordings. And lest anyone think he’s responding to “new creators” making possibly fair uses of his music, his most important Content ID-related takedowns have been aimed at global brands and Fortune 500 companies using his music for marketing purposes without a license.

The fact that this artist chooses to remain anonymous, out of concern that YouTube would delete his Content ID account out of spite, speaks volumes against every claim of tech-utopian bullshit Silicon Valley and its network of EFFing dissemblers have been slinging for years. As David Lowery explained in 2016, YouTube is a monopsony, a market with a single buyer, which means they get to make, break, and change the rules as often as they like, and the “sellers” can just eat it.

In this regard, it will be interesting to see if Copyright Match leads to YouTuber-to-YouTuber disputes and how the company will handle them, if it does. For instance, it is not clear at this point that YouTubers whose uploads are wrongly identified as “Matches” will have any kind of counter-notice remedy available to them.

Although the company’s video explaining the new system urges YouTubers requesting Match takedowns to “consider fair use,” it will be truly fascinating to see whether YouTube gives a damn about fair use among its own microcosm of creators. For sure, general users of the platform have never been effectively dissuaded from uploading a wide range of files that could never qualify as fair uses.

None of this should be taken as a dig against YouTubers. To the contrary, I think many of them are brilliant artists and deserve to protect their interests and rights as much as any other creator. But this apparent initiative to protect their interests points to another aspect of YouTube’s ever-changing relationship to copyright enforcement and its relevance to the fight over Article 13.

Don’t Let the Internet Become YouTube?

Not that long ago, YouTube was consistently cited as the apotheosis of the utopian belief that the web will empower creators without gatekeepers—and without copyrights. But where this Copyright Match announcement becomes intertwined with the campaign against Article 13 is that some pundits opposed to the proposal lately cite YouTube as a cautionary tale, asserting that the platform’s often-inconsistent application of copyright protection policies and technical measures exemplifies what should not be done internet-wide pursuant to Article 13. The claim appears to be that because YouTube’s Content ID system has allegedly fostered rampant false strikes, resulting in unfair channel deletions, this generalized stifling is what the “entire internet would look like” if the EU moved forward with the kind of filtering proposed.

While there is certainly anecdotal evidence—some of it compelling—of Content ID error and abuse leading to improper strikes on YouTube, I have yet to see any evidence to support the claim that this problem is both rampant and increasing across the platform. As is often the case, activist groups or observers who have no skin in the game tend to exaggerate anecdotal evidence into statistical assumptions. Or as our anonymous artist puts it, “In 100% of the anti-Content ID statements I’ve ever heard over the years, 100% of the complainers had 0% vested interest in the system: they’re either Google-funded anti-copyright groups or individuals on some kind of personal crusade.”

In this creator’s direct experience with false identifications, he notes that “With about 100,000 Content ID claims in my dashboard since late 2012, I can say that YouTube has delivered me a mistaken ID about 10 times. The anti-copyright crowd will take that as evidence that the system needs to be dismantled or destroyed. I just don’t get it. The perfect shouldn’t be the enemy of the good.” And that is the experience of creators who can avail themselves of Content ID, a group that does not include the creators Maria Schneider identifies, who have no access to any remedy via Content ID at all.

Competing Narratives

So, in the context of the claim that Article 13 filtering would “stifle new creators,” we have at least three narratives that compete and crisscross in ways that can be hard to track if you’re not directly engaged with these systems. First, because Copyright Match is a response to YouTuber complaints about freebooting, it reveals that “new creators” don’t like copyright infringement when it happens to them (ergo copyright is not obsolete). Second, Copyright Match implies that filtering technology of this nature can be implemented without destroying a whole platform or stifling new creators. And third, Copyright Match is at least indicative of technology that could help non-YouTuber creators enforce their rights, but it will not be made available to them because it isn’t in YouTube’s interest to do so.

One thing the introduction of Copyright Match illustrates for sure is that creators are creators—whether traditional or new, they feel a sense of ownership in the products of their labor. And from this premise comes the foundation of copyright and systems for protection that will begin to make “new models” look a little more like “old models.” It’s what happens every time a business discovers it is codependent with talented people.

EU Copyright Proposal Article 13 Set to Destroy the Internet (Again)

As mentioned in my previous post, Article 13 of the EU Directive on Copyright in the Digital Single Market is the latest proposal that will “destroy the internet as we know it,” if the statute is ratified in its present form. The #copyright feed on Twitter seems dominated by messages proclaiming the existential toxicity of Article 13, and, as usual, there are a lot of articles agreeing with one another that this proposal is really bad—all of them long on synonyms for bad, but short on substance as to why bad things will actually come to pass. And the reason for this is that Article 13 does not prescribe any specific practices but rather proposes a process for developing them. If that sounds like a distinction without a difference, it isn’t.

The broad goal of the EU Directive is to create a Digital Single Market (DSM), something that consumers, digital rights activists, and even many rightholders have been advocating for years. Harmonizing the disparate copyright laws (as they relate to internet distribution) of the 28 member countries should facilitate better access for users and, in principle, foster more robust trade in digital goods throughout the continent.

But the European Commission simultaneously recognizes various threats posed by the internet industry to the authors of works—especially from major platforms that host vast amounts of user-uploaded, copyrighted material. In the Directive, these are called Online Content Sharing Service Providers (OCSSPs), a category that excludes non-commercial sites like encyclopedias or sites where “content is uploaded with the authorisation of all concerned rightholders, such as education or scientific repositories.”

Presently, the Directive’s Article 13 mandates that Member States work with major service providers, user representatives, and rightholders to develop technical measures designed to filter content in order to prevent or mitigate the uploading of infringing material. Here’s the language from the current draft:

“Member States shall facilitate, where appropriate, the cooperation between the online content sharing service providers, users, and rightholders through stakeholder dialogues to define best practices for the implementation of the measures referred to in paragraph 1 in a manner that is proportionate and efficient, taking into account, among others, the nature of the services, the availability of technologies and their effectiveness in light of technological developments.”

This is what will supposedly destroy the internet as we know it. A provision that Member States work with stakeholders to develop technical measures to filter unlicensed content from various platforms. A process that, if it happens at all, will take several years of negotiating (and bickering) to implement. I feel compelled to interject that when the DMCA was passed in the U.S. in 1998, it also contained a mandate that rightholders and service providers collaborate to develop technical measures in order to filter for infringing content. In fact, the service providers testified to the availability of such technical measures as part of their rationale for lobbying for the safe harbors in the DMCA in the first place.

If America’s past is Europe’s prologue, the major service providers—with ample help from anti-copyright ideologues—will fight the implementation of such measures at every phase, so we’re at least 10-15 years from “destroying the internet we know.” I find this funny because the “internet we know” (using YouTube as a reference) isn’t 15 years old yet, and I’m not sure why the internet of 2031 should be required to resemble the internet of today in any context whatsoever.

Although the EU Directive is not specific about what technical measures should be developed and implemented, the usual chorus of critics hears the death knell of the internet in the mere suggestion that such technical measures should even be considered. Thus, the main message they’re selling—the one they always sell—is that no technical measure could ever be implemented without fostering censorship of protected speech. Hence, Pirate Party Member of the European Parliament Julia Reda’s labeling these provisions “censorship machines.” Then, the anti-copyright voices in academia and “digital rights” groups jump on board with scary-sounding declarations like this one:

Algorithms Can’t Assess Fair Use!

It’s true. They can’t. And the day they can, I’m going into the bunker because this would indicate the machines have woken up and are about to kill us. Of course, most human users who upload copyrighted works aren’t very good at assessing fair use either—or more to the point, most human users don’t bother thinking about what they’re uploading, period. It is simply assumed at this point that every user is free to upload whatever he wants without considering whether he has any right to make a work available online.

Just because digital activists and copyright haters paint a picture of an internet replete with fair uses, that doesn’t make it true. In fact, in my anecdotal experience with friends—including artists who don’t want to infringe—almost nobody has taken the time to understand fair use. So, are social media platforms more richly populated by fair uses or infringing uses? I don’t know. But neither do any of the people currently overstating assumptions about fair use in order to scare users about the provisions in Article 13.

Moreover, as alluded to in my last post, if content filtering systems are too hypersensitive, they will disrupt the use of licensed works. For instance, I pay for the stock photos I use on this blog, but if WordPress deploys a filter that is too robust and rejects every image, that’s bad for me and the rightholders of those images. This is a tiny example of why Article 13 requires stakeholders to develop technical measures through collaboration.

And on that point, why don’t the digital activists ever seem to want to collaborate on such initiatives rather than invoke the Book of Revelation at the mere prospect of having the conversation? Because a) they fundamentally hate copyright and have no intention of finding compromise; or b) they really are in Silicon Valley’s pocket and seek policies that serve the interests of Google et al.

It is important to keep in mind that almost no proposal—from voluntary to statutory—is ever endorsed by these parties if it implies even a hint of platform responsibility for user-uploaded content. This remains true despite the staggering evidence that our 20-year policy of leaving platforms to their own “merits” resulted in the Facebook/Russia/Cambridge-Analytica scandal. These events have led many citizens to reconsider the need to preserve “the internet as we know it” in favor of adapting to an internet that better serves society. To achieve this, we will have to accept that, in fact, there are laws applicable in physical space which are not rendered obsolete by interacting in cyberspace. Maybe if we describe this approach as disrupting the internet, the digital activists will get it.

Don’t Mourn for the Memes Just Yet

Over the weekend, a photograph taken by Jesco Denzel went mega-viral. Ultra-viral? Really really viral? Whatever. It killed. You must have seen it. It depicts leaders of the G7 nations, headed by German Chancellor Angela Merkel, in a composition that seems to suggest the adults of the world are schooling a petulant-looking Donald Trump. But I don’t mention the photograph to comment on the President or about his administration’s posture regarding international trade. I mention it because by now, the image has been “memed” dozens or hundreds of times; and although any number of these derivative images may be amusing, I have to question the extent to which they are particularly important.

Consider what I assume to be a favorite version among Trump critics: the one that shows the President seated in a high chair with a bowl of spaghetti overturned on his head. It’s funny. But what it says is largely redundant with the way I think many people read the original Denzel photograph in the first place. What has the meme really added? A fleeting moment of comic relief soon to be forgotten amid the millions more to come? Or is it truly a substantive work of political satire that will have lasting, salient effect?

In the context of this post, the meme version is not necessarily a fair use as a parody, if it were ever to be the subject of a copyright infringement claim. Without doing a whole fair use analysis, the fact that the spaghetti version merely emphasizes what the original says (at least to Trump’s critics) weighs against a finding of fair use in which the meme-maker parodied the work rather than merely used Denzel’s photograph to lampoon the President. But within that analysis lies a hint about the social and cultural value of memes in general.

Because the meme in this example adds almost nothing while potentially diluting the value of the original—both for the author and the viewer—we should not completely ignore what we lose in the digital age, when an important image is no longer allowed to simply be what it is for even a few hours before every prankster with Photoshop has to draw metaphorical mustaches on it. Though funny, the spaghetti variation of Denzel’s photograph is glib in contrast to the provocative quality of the original, which my friend, the photographer Doug Menuez, predicts may prove to be one of the truly important photographs in history.*

If it seems that I exaggerate the worthlessness of memes, it is only to propose some counterbalance to the more general attitude that the social media meme is a medium of great value. And the reason I stress a more balanced view is that several stories have surfaced recently declaring that if the current EU plan to harmonize copyright law for the digital age passes as written, memes will be banned from the internet. So, aside from the fact that, of course, memes will not be banned, I’m not convinced society would lose anything if memes were either fewer in number or less infringing in nature.

Specifically, this “save the meme” campaign is one of several lines of attack on the proposals in Article 13 of the European Commission’s strategy to create a Digital Single Market. This section outlines a mandate for platforms that host user-generated content to implement technological filters that identify and help remove infringing material from their platforms. Pirate Party Member of the European Parliament Julia Reda has labeled these technical measures “censorship machines” (of course she has), and this rhetoric has been echoed by the usual suspects EFF, Techdirt, et al. as the latest major threat to the “internet as we know it.”

On that subject, I’d like to remind readers that the “Russian hacking” of American democracy via Facebook ads was a campaign based largely on memes. If you watched the hearings on Capitol Hill, memes are what Members of Congress presented to Zuckerberg as evidence of Russian-sponsored messages designed to foment and aggravate divisiveness among American citizens. So, not only would I caution against too ardently “saving the meme,” it seems increasingly clear that the more generalized agenda to save “the internet as we know it” cannot be taken too literally.

In a subsequent post, I’ll try to dig into Article 13 in more detail, but the general complaint being marketed as inevitable meme extinction assumes that any technical measure employed to filter the uploading of unlicensed content will not be able to detect fair uses. Consequently, speech—potentially speech of great parodic significance—will be removed from the internet.

It’s a ballsy complaint coming from the same crowd that insists rightsholders must “consider fair use” before sending a DMCA takedown, while apparently believing that the user of a work need not “consider fair use” before uploading. I say this because these same critics assume, or at least promote the idea, that most meme uses of protected images are fair uses. In all likelihood, however, this is not the case. Most memes I see would not stand up to fair use analysis, so what the critics are really saying is that memes are just too important to lose, even if they’re infringing.

So, I would first reiterate that a very large volume of memes are less culturally valuable to society than they are financially valuable to the platforms. Second, these critics overstate the assumption that everyone who alters a photo to make a meme is engaged in a fair use—be it funny, poignant, cruel, or just Russian agents having fun. Third, and perhaps most importantly, if the so-called “censorship machines” were as hyperactive as the critics claim, these measures would invariably harm the interests of rights holders, advertisers, and any other party who benefits from licensed use of works on social platforms.

This suggests that perhaps nobody envisions “censorship machines.” In fact, if experience tells us anything in this regard, it’s that the anti-copyright, pro-Google “activists” start saying “censorship” and “break the internet” at the mere suggestion that any proposal should change the status quo. Hence the specifics are either still in development or are being purposely obfuscated by the critics.

As I say, I’ll do my best to get into the specifics related to Article 13, but in the meantime, I’ll summarize what I said to Washington Post tech reporter Caitlin Dewey when she predicted the death of memes in 2012: infringing protected works is not actually necessary to produce memes; authors of works produce all the time without infringing; it’s called being creative.


*I do not claim to know how Mr. Denzel feels about any of the memes of his photograph.