During Tuesday’s joint Senate committee hearing, as Mark Zuckerberg kept promising to take better control over content on Facebook, Senator Ted Cruz (R-TX) asked the CEO point blank whether the site is a neutral platform or a publisher. Cruz acknowledged the company’s right to act as a publisher but also alluded to the premise that its liability protection under Section 230 of the Communications Decency Act rests on Facebook, as a host of user-generated content, being a neutral platform.
It was a little surprising when Zuckerberg said he’s not familiar with the statute that so universally shields his company from most forms of liability, but that is exactly what Section 230 of the CDA does. As explained in an older post, this statute broadly immunizes websites that host user-generated content against civil and criminal liability that may arise from users’ online conduct. It is in fact so universally applied as a defense that on Wednesday, FOSTA (the Fight Online Sex Trafficking Act) was signed into law in order to clarify that Section 230 was not meant to shield site owners from liability stemming from the sex trafficking of minors.
But the real bee in Cruz’s bonnet, the one provoking his question, is his general belief that social media platforms censor “conservative” content while favoring “liberal” content. I have no idea whether there is any data to support that allegation, but I doubt the senator has it either, or he probably would have offered more than anecdotal evidence during the hearing.
Regardless, Cruz’s line of inquiry, without necessarily meaning to, gets to the heart of just how complicated Facebook’s current challenges may be—that is, if the company really intends to address them. It’s hard enough to define “liberal” and “conservative” these days, but that seems like child’s play compared to expecting Facebook to draw lines for appropriate censorship that a majority of users, independent of our political opinions, will agree are the right lines.
I’m inclined to believe Zuckerberg when he says he wants Facebook to be an engine of social good, but for most organizations, striving for that goal usually requires deciding what is and is not good, then earning the support of those who agree and accepting the opprobrium of those who do not. This is a fundamental problem with being a so-called neutral platform for social good: there’s nothing neutral about our diverse opinions of what is good. Plus, it’s the nature of politics to cross lines of decorum and truth, and social media is a very cost-effective means of provoking emotional responses to messaging on just about any topic.
So, it’s easy for senators to allude rhetorically to a consensus about where the lines are for internal, corporate censorship, but I am skeptical that such a consensus actually exists among us Americans, let alone among Facebook’s majority non-American users. And the hotter the issue, the more jagged the lines are going to be. Social media algorithms also respond to popularity, so an issue like guns, for instance, may naturally trend in opposition to a guy like Cruz if in fact most Americans favor regulation.
On that topic, if a friend takes and posts a photo of a billboard in Louisville that says “Kill the NRA,” will that meet someone’s (or some AI’s) definition of inciting violence? Probably. According to USA Today, when that billboard appeared in February, the NRA’s Facebook page posted a photo of it, saying the billboard was “a wakeup call. They’re coming after us.”
From a First Amendment standpoint, neither the photos of the billboard nor the NRA’s response warrants censorship, and perhaps this would be true of Facebook policy as well. Or Facebook could make a decision that both the billboard photo and the NRA response cross some line in the violence category, although it seems very hard to completely remove the rhetoric of violence when the issue itself is weapons.
Throughout the hearing, Zuckerberg consistently reiterated plans to eventually deploy AI to help weed out toxic content, and although this may address the manpower challenge of moderation, it doesn’t solve the more nuanced problem that we as a society do not have a common definition of what content qualifies as toxic. Does this mean we would cede that ethical calculus to the AI, which is eerie on a whole other level?
Predictably, the EFF published a post arguing that reliance on AI for content filtering will only result in over-censorship, and I have to say (rare though it is) that I tend to agree with the organization that it seems almost impossible to distinguish between, for instance, “hate speech” and a discussion about “hate speech.” Where the EFF and I part on this subject is that they’ve already concluded that Facebook has an obligation to free speech, while I view this current dust-up as a catalyst for, perhaps, finally addressing that unresolved assumption.
Still, it seems damn difficult to reconcile the fact that social media adds an especially volatile fuel to the political tinderbox with Zuckerberg’s sincere hope that Facebook will be an “engine of good.” Maybe Facebook will ultimately have to answer Cruz’s question by saying that it is a publisher, and that it has both a right and a responsibility to cultivate whatever community its leadership deems to be a “social good.”
Yes, this would obliterate the liability protections established by both the CDA and the DMCA, but maybe there are remedies other than a blanket shield for platforms that achieve the size, scope, and influence of a Facebook or a YouTube. After all, if Congress is actually trying to achieve anything in this investigation—if this isn’t just political theater—its questions imply a new paradigm for public/private cooperation in cyberspace. As described in a recent post, we have yet to attempt the unprecedented balancing act demanded by the kind of public commons/private community hybrid that a Facebook truly is.
“Predictably, the EFF published a post arguing that reliance on AI for content filtering will only result in over-censorship”
Nothing dictates that AI is solely used for this. What the EFF are saying is that Facebook, Google, Twitter, et al., with all their billions, can’t be arsed to actually employ people to filter out the AI- or user-flagged stuff. That it eats into their bottom line. Thus we should suck up the crap so that big tech can make a few more bucks.
Thanks, John. I’m sure that’s a piece of the puzzle. I have colleagues who think Silicon Valley is promoting AI for other reasons–as in literally promoting it because they have products they hope to sell to the DOD, et al. But that’s a whole other conversation. Mostly, I suspect this is all theater–that Congress wants to look angry but probably will not take any meaningful action, partly because it’s tough to identify what that would look like, even without partisanship. It is remarkable (or not), though, that the EFF only ever plays one note here–that of “over-censorship.” It seems that no matter what the real-world results look like, they cling with religious zeal to the idea that more, unfettered “speech” is the answer.