Cruz Asks Zuckerberg the Section 230 Question

During Tuesday’s joint Senate committee hearing, as Mark Zuckerberg kept promising to take better control over content on Facebook, Senator Ted Cruz (R-TX) asked the CEO point blank whether the site is a neutral platform or a publisher. Cruz acknowledged the company’s right to act as a publisher but also noted that its liability protection under Section 230 of the Communications Decency Act rests on the presumption that, as a host of user-generated content, Facebook is a neutral platform.

It was a little surprising when Zuckerberg said he’s not familiar with the statute that universally shields his company from most forms of liability, but Section 230 of the CDA is just that. As explained in an older post, this statute broadly immunizes websites that host user-generated content against civil and criminal liabilities that may arise from users’ online conduct. It is in fact so universally applied as a defense that on Wednesday, FOSTA (the Allow States and Victims to Fight Online Sex Trafficking Act) was signed into law in order to clarify that Section 230 was never meant to shield site owners from liabilities stemming from the sex trafficking of minors.

But the real bee in Cruz’s bonnet provoking his question is his general belief that social media platforms censor “conservative” content while favoring “liberal” content. I have no idea whether there’s any data to support that allegation, but I doubt the senator has the data himself, or he probably would have alluded to more than anecdotal evidence during the hearing.

Regardless, Cruz’s line of inquiry, without necessarily meaning to, gets to the heart of just how complicated Facebook’s current challenges may be—that is, if the company really intends to address them. It’s hard enough to define “liberal” and “conservative” these days, but that seems like child’s play compared to expecting Facebook to draw lines for appropriate censorship that a majority of users will agree are the right lines, independent of our political opinions.

I’m inclined to believe Zuckerberg when he says he wants Facebook to be an engine of social good, but for most organizations, striving for that goal usually requires making a decision about what is and is not good and then earning the support of those who agree and accepting the opprobrium of those who do not. This is a fundamental problem with being a so-called neutral platform for social good: there’s nothing neutral about our diverse opinions about goodness. Plus, it’s the nature of politics to cross lines of decorum and truth; and social media is a very cost-effective means of provoking emotional responses to messaging on just about any topic.

So, it’s easy for senators to allude rhetorically to a consensus about where the lines are for internal, corporate censorship, but I am skeptical that such a consensus actually exists for us Americans, let alone Facebook’s majority non-American users. And the hotter the issue, the more jagged the lines are going to be. Plus, social media algorithms respond to popularity; so an issue like guns, for instance, may naturally trend in opposition to a guy like Cruz if in fact most Americans favor regulation.

On that topic, if a friend takes and posts a photo of a billboard in Louisville that says “Kill the NRA,” will that meet someone’s (or some AI’s) definition of inciting violence? Probably. According to USA Today, when that billboard appeared in February, the NRA’s Facebook page posted a photo of it, saying the billboard was “a wake-up call. They’re coming after us.”

From a First Amendment standpoint, neither the photos of the billboard nor the NRA’s response warrants censorship, and perhaps this would be true of Facebook policy as well. Or Facebook could make a decision that both the billboard photo and the NRA response cross some line in the violence category, although it seems very hard to completely remove the rhetoric of violence when the issue itself is weapons.

Throughout the hearing, Zuckerberg consistently reiterated plans to eventually deploy AI to help weed out toxic content; and although this may address the manpower challenge of moderation, it doesn’t help answer the more nuanced problem that we as a society do not have a common definition of what content would qualify as toxic. Does this mean we would cede that ethical calculus to the AI, which is eerie on a whole other level?

Predictably, the EFF published a post arguing that reliance on AI for content filtering will only result in over-censorship, and I have to say (rare though it is) that I tend to agree with the organization that it seems almost impossible to distinguish between, for instance, “hate speech” and a discussion about “hate speech.” Where the EFF and I part on this subject is that they’ve already concluded that Facebook has an obligation to free speech, while I view this current dust-up as a catalyst for, perhaps, finally addressing that unresolved assumption.

Still, it seems damn difficult to reconcile the fact that social media adds an especially volatile fuel to the political tinderbox while Zuckerberg sincerely hopes that Facebook will be an “engine of good.” Maybe Facebook will ultimately have to answer Cruz’s question by saying that it is a publisher, and that it has both a right and a responsibility to cultivate whatever community its leadership deems to be a “social good.”

Yes, this would obliterate the liability protections established by both the CDA and the DMCA, but maybe there are remedies other than a blanket shield for platforms that achieve the size, scope, and influence of a Facebook or a YouTube. After all, if Congress is actually trying to achieve anything in this investigation—if this isn’t just political theater—their questions imply a new paradigm for public/private cooperation in cyberspace. As described in a recent post, we have yet to attempt the unprecedented balancing act between the kind of public commons/private community that a Facebook truly is.

Speech Maximalism on SESTA is Madness

This refrain keeps replaying in my head lately:  The EFF and its sister organizations are to cyberlaw as the NRA is to rational gun policy in America.  That seems like a pretty harsh thing to say about a bunch of progressives (and one must even include the ACLU in this discussion), but in the context of policy debate, the maximalism with which these organizations continue to defend the liability shield (Sec. 230) of the Communications Decency Act (1996) on behalf of a single multi-billion-dollar industry is logically comparable to the maximalism with which the NRA has marketed so much ahistorical nonsense about the Second Amendment on behalf of gun manufacturers.

While it’s hard to look away from the circus playing round-the-clock at the White House, it is certainly necessary to look beyond it.  The story of where American democracy is heading is not Donald Trump, though it may be (metaphorically speaking) Elon Musk.  The fact that Musk announced he could power Puerto Rico in response to official U.S. dithering is both intriguing and generous, but it is also a frightening commentary on the condition of the American state.  Even as an idea, Musk’s offer is a subtle harbinger of the tipping point I fear we may be approaching—that the state becomes so dysfunctional, the people turn to the oligarchy of technologists and say, “save us” from ourselves. At that point, American democracy will come to an end. Cue 21st-century American feudalism.

Before we head quite that far into a sci-fi thriller, though, we are truly at an inflection point when the fate of a couple of bills in Congress will say a lot about how much power and influence Google and the other major internet players have in Washington.  H.R. 1865 and S. 1693 (SESTA) would amend Section 230 of the CDA to explicitly prohibit online support of trafficking minors in the sex trade and thus open pathways to both civil and criminal prosecution.  These bills are largely a response to allegations stemming from investigations into Backpage, which, the National Center for Missing & Exploited Children estimates, is where 73% of all children trafficked in prostitution are bought and sold.

I am told by various contacts in D.C. that Google’s lobbyists—parent company Alphabet now ranks among the top five spenders in the country—have been out in force to kill these anti-trafficking bills in committee. Meanwhile, the EFF and other Google-funded organizations have the unenviable task of telling the American people—once again—that free speech on the web will suffer if we pass legislation designed to help protect children from sex-trafficking.  As explained in a previous post, SESTA proposes a change in the Section 230 statute that is so narrow it could never affect the vast majority of internet users.

Your site would have to be a lot like Backpage, or would have to be as big as Google or Facebook just to be in the orbit of potential liability under SESTA.  Even a pornography site that might inadvertently host video depicting sex acts with trafficked minors (and that’s a big hypothetical) would not necessarily be liable under SESTA because, depending on what actions the site owners were to take, they could still qualify for the safe harbor provisions of Section 230.  Any implication that the vast majority of us who do not run globally substantial sites, or who do not use the web to conduct transactions in the sex trade, will somehow feel a tremor in the force of free speech is rank hysteria.

But Google, with all its wealth and influence, would rather not have so much as a pinhole of liability pierced into the CDA shield—even if it means providing a modicum of legal remedy for victims of sex-trafficking by prosecuting individuals who have nothing whatsoever to do with Google. I can only imagine there must be a few members of the EFF who are either experiencing moral crises over this issue, or downing 10 a.m. shots just to quiet the cognitive dissonance because they’ve got to know their free speech arguments against SESTA are complete hogwash.

Overcoming Free Speech Maximalism

In the same way that the NRA markets a message that guns create freedom, the internet industry has sold a very similar maximalist view that the First Amendment is perpetually strengthened by the immeasurable volume of interactions on the internet.  Just as the American who owns ten guns is not ten times freer than the American who owns one gun, the American who tweets a hundred times a day is not freer than the American who doesn’t have a Twitter account at all.  Nevertheless, when one reads the declarations insisting that every peep uttered in cyberspace is sacred, it is hard to miss the rhetorical similarities between the NRA and the internet activist organizations.

Like anyone with a maximalist view—or a financial stake in espousing one—both the NRA and the EFF reveal a callous disregard for the harm being done by the policies they endorse.  The EFF hasn’t explicitly said “child sex-trafficking is the price we pay for freedom,” but that’s effectively the argument they’re making with their overplayed appeals to the First Amendment in the context of SESTA.  Adding further to this irony is a complete disregard for the fact that the internet as we know it is actually making quite a hash of the democratic principles which the protection of speech is meant to serve.

In almost the same manner in which Citizens United undermines the intent of speech protections by giving a louder voice to financially empowered corporations, the economics of the web do the same thing more broadly and more insidiously.  If it is fundamental to American democracy that the population has access to relevant and accurate information, it is no surprise that the economics of attracting and monetizing web traffic fails to serve this purpose. (Or have I missed something and American democracy is healthier than ever?)  Journalism (i.e., information) is supposed to be the practice of telling people what they need to know, while the design of the web we have is fundamentally built to tell people what they want to hear.

Adaptive algorithms that anticipate our interests, biases, and desires are relatively innocuous, perhaps even beneficial, if we’re shopping for toasters; but these designs can be toxic to democracy when we’re “shopping” for news.  In a solid, concise op-ed for Forbes about the folly of current support for Obama-era net-neutrality policies, Fred Campbell calls the internet as we know it “a mess.” “Policies that net neutrality advocates are clamoring to preserve have facilitated the internet’s roles in undermining fair elections, providing a safe haven for sex traffickers, destroying privacy, nurturing the world’s largest information monopolies (e.g., Google, Amazon), subverting free speech, and devastating publishing industries,” Campbell writes, suggesting that we should let the internet be overhauled because it’s hardly living up to the vision of its founders in the 1960s.

Campbell cites a paper by Professor Shoshana Zuboff of Harvard Business School and the Berkman Center for Internet & Society (an organization typically aligned with internet industry views), who calls the current economics of the web “surveillance capitalism.”  “This new form of information capitalism aims to predict and modify human behavior as a means to produce revenue and market control,” Zuboff writes.  That description certainly rings true with experience and hardly seems to jibe with the foundational assumption that the internet is “the greatest tool for democracy ever created.”

In 1783, in the uncertain period between the end of the American revolution and the establishment of the United States, Alexander Hamilton wrote to John Jay, “It is hoped when prejudice and folly have run themselves out of breath, we may embrace reason and correct our errors.”  He was referring to the many competing forces driving people away from the establishment of a unified nation.  Today, Hamilton could easily be talking about Facebook and Twitter because it would be hard to make the case that the internet is not, on balance, having a centrifugal effect on the electorate.  As such, free speech maximalism is not only specifically immoral as a response to a bill like SESTA, but it is also generally untenable as a premise for broader debates about cyberlaw.


Image by stawy13

Read Christopher Zara’s Section 230 Article

Photo by Pond5.

Christopher Zara, writing for Backchannel, offers an excellent discussion about Section 230 of the Communications Decency Act of 1996.  He provides historical context and a balanced presentation of the challenges that have arisen from the differences between the law’s intent and its application.

“Given how often Section 230 is championed, cited, and showered with superlatives, you might not know there is a raging debate going on about how well the law actually works.”

Of course, the business broadly described as “the internet” was a very different animal in 1996, and as Zara describes in considerable detail, we have yet to fully address some of the liability implications that may pertain to an Airbnb-type platform versus those that might pertain to a Facebook-type platform. “Digital rights” advocates, and of course the businesses themselves, argue that all platforms should be treated equally under Section 230—meaning that Airbnb would be no more responsible for a bad listing than Facebook is for a user’s sharing of defamatory material.  But is Airbnb truly a web platform hosting third-party content in the same sense as Facebook, or is it a hotel booking service that uses web technology, thus implying a different set of responsibilities never considered under Section 230?

In fact, if you read my last post and the critical comment on it from Anonymous, he/she correctly points out that Section 230 was created in order to allow platforms to remove objectionable material without incurring liability.  Zara’s article provides insightful background on this from Senator Ron Wyden (D-OR), who co-authored Section 230 with Chris Cox (R-CA) when both served in the House of Representatives.  But Zara also observes that Section 230 is indeed invoked as a defense by platform operators who take no action to remove potentially harmful material.

As cyberspace becomes increasingly integrated with the physical world—and as users come to grips with the supposed neutrality of information—we are probably going to hear a lot more about Section 230 in the relatively near future. Christopher Zara’s article is a great starting point for anyone hoping, as I am, to better understand the issues.