Platforms Wrestle With the Difficult After Years of Ignoring the Easy

A new, in-depth post by Mike Masnick at Techdirt correctly describes many of the challenges inherent to platform moderation of content. It was enough of a departure from his usual “anything goes” stance that he wrote a preamble acknowledging that he was likely to piss off a few readers. And it is, admittedly, a little bit fun to watch some of the web cheerleaders stumble these days as they try to walk back the utopian view that all content online is fundamentally free speech and that removal of anything is inherently censorship.

Now that the public conversation is less comfortable with “free speech” as a universal answer—beginning with Facebook taking money for political ads made by Russian agents—Masnick et al. have little choice but to engage in a more nuanced dialogue, one that at least begins with the premise that some platform responsibility is worth considering. His post highlights a few possible solutions to “bad” content, including his own proposal; and while I think he correctly describes the complex nature of content moderation by administrators, I’m not sure any of the solutions cited address the real problem. His highlights include the following:

Yair Rosenberg recommends counterprogramming, which is essentially responding to misinformation with facts at the point of user interaction. Tim Lee advocates down-ranking less credible sources that appear to be news. David French proposes that the platforms remove only libel and slander, because these categories don’t require new legal definitions. And Masnick proposes that a platform like Facebook abdicate its centralized control over filtering and algorithmic ranking, ceding that power to users, who would set their own parameters for what they want to see.

“And, yes, that might mean some awful people create filter bubbles of nonsense and hatred,” Masnick writes, “but average people could avoid those cesspools while at the same time those tasked with monitoring those kinds of idiots and their behavior could still do so.” To me, this statement implies that Masnick’s “protocols” solution is largely cosmetic: it may result in us “average people” not seeing as much garbage, but it in no way alters the underlying model of “surveillance capitalism,” and it merely papers over the social disease whereby garbage continues to gain undue support and exert undue influence in the mainstream. (This was discussed in my last post about the paper by Alice E. Marwick on why we share fake news.)

When YouTube and Facebook shut down the accounts of conspiracy nut Alex Jones’s Infowars last week, doubtless some cheered, others cried foul, and still others warned that attempting to silence even the most outrageous wack-jobs can turn them into martyrs and galvanize their cult-like followers into an even larger mob. That prediction is almost certainly correct, and it points to the real question I have, which is not whether Facebook should keep Jones off my feed to avoid offending me, but why so much outright garbage information currently plays such an outsized role in the social and political narrative of the United States.

I can see how some of the solutions Masnick mentions, including his own, might diminish some of the low-level sharing of junk news by “average” thoughtful people, but none of these proposals tackles the big social phenomenon itself — that the internet has been the catalyst for elevating toxic misinformation to an unprecedented level of tangible influence. The crazies who used to be conveniently segregated by geography (the proverbial idiots in every village) can now coalesce in cyberspace, finding strength in numbers, reinforcing their “deep stories” (to use Alice Marwick’s term), and taking tangible action in the streets or at the polls.

So, while the tech pundits and the internet companies look for (or pay lip service to looking for) technological responses to these social ills, the underlying reasons why we are suddenly reacting to “bad” content and putting pressure on the major platforms may not actually be addressable—either by the companies simply removing content or by public policy that attempts to parse hate speech and other highly subjective concepts.

Masnick is not wrong that the task of editing speech is extremely difficult for the platforms, which is presumably the main reason he advocates putting that control in the hands of users. As I say, I’m ambivalent about this approach because I think the end result will be the same—increased credibility for outright crazy shit via one portal or another. If there is an antidote to that problem, I strongly suspect it is not technological but human. Still, now even the tech-utopians have to acknowledge that treating all online content as sacred has had some very negative consequences, so perhaps we can have a different discussion about content that would not be protected speech in any context.

For those of us who have advocated platform responsibility for quite some time, it is amusing, if not frustrating, to watch the industry wrestle with the truly difficult issue of moderation after years of refusing to compromise on the comparatively simple issue of removing material that is patently illegal. Weeding out material that infringes copyright, or that a court has held to be libelous or otherwise harmful to a claimant, is much easier than deciding when it’s okay to remove or demote “bad” speech. Yet the major platforms, with considerable help from opinion-makers like Masnick, have historically treated the proposed removal of unprotected or illegal content as a prelude to “rampant censorship” and the destruction of all that is beautiful about the internet.

This recent shift in posture implies two things in my view: the first is that the platforms can indeed be more cooperative in responding to illegal content without damaging the benefits of the internet; and the second is that those benefits have never been all they’re cracked up to be. Admitting to the latter would go a long way toward reframing a more rational discussion about the former.
