Moderation in all things. Except perhaps social media.

Is it just me, or have the digital rights folks lately shifted the narrative on the subject of platform responsibility and content moderation?  Where once they could be counted on to repeat the commandment Thou Shalt Not Touch Online Content, I perceive a more nuanced (sounding) agenda now recommending Best Practices for Touching Online Content If You Really Must.

For instance, in a blog post of April 29, Jillian C. York and Corynne McSherry of the Electronic Frontier Foundation declared that content moderation on social media platforms is broken, outlining four key reasons why various attempts to date have been fraught with problems—all descriptions I would not quarrel with per se.  Neither would I disagree with the following statement they post beneath the headline No More Magical Thinking:

We shouldn’t look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable.  As companies increasingly use artificial intelligence to flag or moderate content … we’re inevitably going to see more errors. 

It is hard to refute the premise that a platform will have a very hard time moderating content without error.  But I would also contend that those who believe that satisfactory moderation guidelines can be developed at all (moderation in moderation, as Wilde would say) are still engaged in magical thinking.  Let’s face it.  Even before we finish the thought, the process has already stalled.  Error according to whom?

I personally could not care less if Facebook tosses Alex Jones, Louis Farrakhan, and Milo Yiannopoulos into the digital oubliette, but that decision was considered an egregious error by many—not only fans of one member of that bizarre triad, but speech rights advocates and those who recommend keeping hate-mongers and other extremists in plain sight under the theory that it is better to know thy enemy.

Regardless, while it is very easy for many of us to say good riddance to certain types of high-profile provocateurs, it is a far more complex challenge to define broadly applicable terms specifying exactly why an Alex Jones should be removed.  And Jones is relatively simple compared to more subtle examples of what might still be deemed toxic speech.  Even perhaps Milo. But picking up on EFF’s reference to “Silicon Valley or anyone else,” it seems that some parties believe they can be the “international speech police” as long as they really, really care about speech.

The Global Digital Policy Incubator at Stanford, along with the group ARTICLE 19, recently released a broad report describing a working meeting held in February to develop what the group calls Social Media Councils (SMCs).  Responding to concerns that the platform companies cannot be trusted to self-regulate, and that government regulators are likely to overreact and be tainted by political agendas, SMCs would operate independently—either at a regional, national, or international level—to draft universal guidelines for moderation and, potentially, function as an appeals body to adjudicate alleged errors in content removal.

While the report does acknowledge the scale, complexity, and cost of implementing SMCs, I think it woefully underestimates the ambition of the whole concept, which I would compare to establishing a UN for the internet.  While there are several moving parts to the proposal begging for comment, I will presume to jump to the conclusion that the market itself, for better or worse, will change long before SMCs can ever be implemented.  But even if SMCs could be created and develop useful guidelines, this would do nothing to mitigate the underlying challenge that social media helps to foster completely false narratives perpetuated by perfectly reasonable people.

Social media’s harm to liberal democracies does not end with the excision of the most extreme and obvious purveyors of hate speech and incitements to violence. The harm done by social media and other platforms is much more subtle than that, and thoughtful, decent people are often the conduits, if for no other reason than that every exchange, no matter how innocuous, teaches the algorithm how to market to someone. If that’s marketing a product, no big deal; if it’s marketing a false narrative about public policy, it’s a very big deal.

As I have repeated multiple times, what concerned me most about the anti-SOPA campaign of 2011/12 was not the copyright issue, but the staggering effectiveness of hyperbole and misinformation. “This could be any issue,” was my first thought at the time, and indeed, we have now seen some of the worst effects that data-driven misinformation has had on democratic countries around the world. Yet, it seems as though the EFF, ARTICLE 19, et al., in their recent efforts to recommend guidelines for moderation, are still clinging to a general belief that social media has largely been a positive force for democratic values. Rebecca MacKinnon, Director of Ranking Digital Rights, states:

“While the internet and related technologies have indeed helped people circumvent traditional barriers to holding governments and powerful corporations accountable, they did not shatter as many walls as democracy and human rights activists once hoped and expected. Instead, daily headlines report how they make us vulnerable to mass surveillance, unaccountable censorship, disinformation, and viral hate speech that incites violence. Entirely new channels have been created for abusing power, in ways that we are still struggling to understand. In many places and on many issues, exercising and defending human rights has grown more difficult.”

I suspect that overall picture will not be improved simply by removing the Alex Joneses and terrorist recruiters from mainstream sites. Or, for that matter, by trying to mitigate erroneous removals of protected speech, given that we’ve had about fifteen years of unmoderated social media, and it hasn’t done much good. Speaking personally, I have all but bailed on interacting with Facebook because I do not want to feed the machine any more data, though I do not imagine that tens of millions of people will suddenly feel the same. Short of that, it seems the only meaningful moderation has to come from users themselves. If social media platforms are going to remain filters of information and debate, then at least recognizing that they are opaque, manipulated, advertising and data-harvesting machines should foster a healthy skepticism. And that may prove more important than all the ambitious best-practices proposals any group can devise.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
