Fool me once, shame on Facebook …

In several posts on the subject of Facebook and fake news, I have opined that if we users are going to believe and disseminate bogus information, that’s mostly an us problem, one which Facebook likely cannot solve. In that spirit, there is an extent to which I agree with Mike Masnick’s Techdirt post on May 2 calling Facebook’s plans to rank news sources according to trustworthiness a “bad idea.” At least I agree with Masnick that a human flaw like confirmation bias is a “hell of a drug,” which cannot be counteracted by whatever algorithmic wizardry Zuckerberg & Team may devise.

But other than conceding that people are imperfect, subjective beings, and therefore susceptible to false information, I disagree with the rationale Masnick seems to apply in his critique of Facebook’s plans. He writes, “…as with the lack of an objective definition of ‘bad,’ you’ve got the same problem with ‘trust.’ For example, I sure don’t trust ‘the system’ that Zuckerberg mentions…to do a particularly good job of determining which news sources are trustworthy.”

Perhaps that’s just wordplay, but I find Masnick’s allusion to the subjectivity of trust symptomatic of the same populist affliction that precipitated the post-truth world in which we now live. I had hoped that the moment we elected a president who openly lies on Twitter might at least serve as a clear and profound rebuttal to the cyber-utopian mantra that everything—including journalism—needed disrupting. Because if trustworthiness in news is not, on some level, objectively quantifiable, then all journalism must devolve to the exigencies of confirmation bias.

A functioning and humane democratic society depends on limits to democracy itself—on deference to expertise based on certain objective criteria to decide when that deference has been earned. It is essential that a reporter write, This Thing Happened—or even Here’s Why This Matters—and that a plurality of reasonable people accept the report as reliable based on objective (if subtle) metrics. Years of experience, background, track record, tone and style, and, yes, the organization a reporter works for should all factor into this assessment. So, I reject the proposal that “trust” is nearly so subjective as “bad” in this context. The integrity of a news report is not a matter of taste. Yet, Masnick writes …

“Facebook should never be the arbiter of truth, no matter how much people push it to be. Instead, it can and should be providing tools for its users to have more control. Let them create better filters. Let them apply their own ‘trust’ metrics, or share trust metrics that others create.”

Call me a curmudgeon, but how is “applying one’s own trust metrics” any different from the same confirmation bias problem that social media tends to exacerbate in the first place? Masnick’s solution appears to be more confirmation bias, resembling the cliché that insists “more speech is the only solution to bad speech.” If that premise were ever true (and I have my doubts), it has been obliterated by the phenomenon of social media, where more is often the enemy of reason.

Masnick is right, of course, that users who like Infowars are going to respond negatively if Facebook ranks that platform as less trustworthy than The New York Times or Wall Street Journal; but that’s a business problem for Facebook—one I couldn’t care less about because Infowars IS objectively less trustworthy than those news sources. And lest anyone think that’s liberal bias talking, I’ll say the same thing about Occupy Democrats or any of the other non-news sources my friends link to all the time.

These platforms don’t deserve equal footing with actual journalism, and if Facebook wants to rank news sources, fine. Whatever. I’m probably as skeptical as Masnick that it will do much good in the grand scheme of public discourse, but I think he exaggerates when he calls Facebook an “arbiter of truth.” This sounds more like the blogger who tends to oppose platform responsibility full stop than a complaint about what Facebook is doing wrong in grappling with its role as a conduit of news. In fact, it’s hard to fathom exactly what Masnick proposes as a solution when he writes, “The answer isn’t to force Facebook to police all bad stuff, it should be to move back towards a system where information is more distributed, and we’re not pressured into certain content because that same Facebook thinks it will lead to the most ‘engagement.’”

That reads like a suggestion that Facebook should not be Facebook, which is probably a non-starter as far as the shareholders are concerned. Instead, I tend to think that Facebook should be recognized for the flawed, highly manipulated walled garden it is and placed in its proper context—as an activity to be moderated like video gaming or junk food. Because with or without rankings, we really have no idea what the psychological effect is of just scrolling past images and headlines that trigger dozens of subconscious emotional responses in a matter of minutes. Meanwhile, to the extent that Facebook remains a source of news and information, if ranking means I’ll encounter The Daily Beast more often than The Daily Democrat, I’ll count that as a win.

Cruz Asks Zuckerberg the Section 230 Question

During Tuesday’s Joint Senate Committee hearing, as Mark Zuckerberg kept promising to take better control over content on Facebook, Senator Ted Cruz (R-TX) asked the CEO point blank if the site is a neutral platform or a publisher. Cruz acknowledged the company’s right to act as a publisher but also alluded to the fact that its liability protection under Section 230 of the Communications Decency Act is based on the fact that, as a host of user-generated content, Facebook is presumed to be a neutral platform.

It was a little surprising when Zuckerberg said he’s not familiar with the statute that universally shields his company from most forms of liability, but Section 230 of the CDA is just that. As explained in an older post, this statute broadly immunizes websites that host user-generated content against civil and criminal liabilities that may arise from users’ online conduct. It is in fact so universally applied as a defense that on Wednesday, FOSTA (the Fight Online Sex Trafficking Act) was passed in order to clarify that Section 230 was not meant to shield site owners from liabilities stemming from the sex trafficking of minors.

But the real bee in Cruz’s bonnet, the one provoking his question, is his general belief that social media platforms censor “conservative” content while favoring “liberal” content. I have no idea whether there’s any data to support that allegation, but I doubt the senator has the data himself, or he probably would have alluded to more than anecdotal evidence during the hearing.

Regardless, Cruz’s line of inquiry, without necessarily meaning to, gets to the heart of just how complicated Facebook’s current challenges may be—that is, if the company really intends to address them. It’s hard enough to define “liberal” and “conservative” these days, but that seems like child’s play compared to expecting Facebook to draw lines for appropriate censorship that a majority of users will agree are the right lines, independent of our political opinions.

I’m inclined to believe Zuckerberg when he says he wants Facebook to be an engine of social good, but for most organizations, striving for that goal usually requires making a decision about what is and is not good and then earning the support of those who agree and accepting the opprobrium of those who do not. This is a fundamental problem with being a so-called neutral platform for social good: there’s nothing neutral about our diverse opinions about goodness. Plus, it’s the nature of politics to cross lines of decorum and truth; and social media is a very cost-effective means of provoking emotional responses to messaging on just about any topic.

So, it’s easy for senators to allude rhetorically to a consensus about where the lines are for internal, corporate censorship, but I am skeptical that such a consensus actually exists for us Americans, let alone Facebook’s majority non-American users. And the hotter the issue, the more jagged the lines are going to be. Plus, social media algorithms respond to popularity; so an issue like guns, for instance, may naturally trend in opposition to a guy like Cruz if in fact most Americans favor regulation.

On that topic, if a friend takes and posts a photo of a billboard in Louisville that says “Kill the NRA,” will that meet someone’s (or some AI’s) definition of inciting violence? Probably. According to USA Today, when that billboard appeared in February, the NRA’s Facebook page posted a photo of it, saying the billboard was “a wakeup call. They’re coming after us.”

From a First Amendment standpoint, neither the photos of the billboard nor the NRA’s response warrants censorship, and perhaps this would be true of Facebook policy as well. Or Facebook could make a decision that both the billboard photo and the NRA response cross some line in the violence category, although it seems very hard to completely remove the rhetoric of violence when the issue itself is weapons.

Throughout the hearing, Zuckerberg consistently reiterated plans to eventually deploy AI to help weed out toxic content; and although this may address the manpower challenge of moderation, it doesn’t help answer the more nuanced problem that we as a society do not have a common definition of what content would qualify as toxic. Does this mean we would cede that ethical calculus to the AI, which is eerie on a whole other level?

Predictably, the EFF published a post arguing that reliance on AI for content filtering will only result in over-censorship, and I have to say (rare though it is) that I tend to agree with the organization that it seems almost impossible to distinguish between, for instance, “hate speech” and a discussion about “hate speech.” Where the EFF and I part on this subject is that they’ve already concluded that Facebook has an obligation to free speech, while I view this current dust-up as a catalyst for, perhaps, finally addressing that unresolved assumption.

Still, it seems damn difficult to reconcile the fact that social media adds an especially volatile fuel to the political tinderbox with Zuckerberg’s sincere hope that Facebook will be an “engine of good.” Maybe Facebook will ultimately have to answer Cruz’s question by saying that it is a publisher, and that it has both a right and a responsibility to cultivate whatever community its leadership deems to be a “social good.”

Yes, this would obliterate the liability protections established by both the CDA and the DMCA, but maybe there are remedies other than a blanket shield for platforms that achieve the size, scope, and influence of a Facebook or a YouTube. After all, if Congress is actually trying to achieve anything in this investigation—if this isn’t just political theater—their questions imply a new paradigm for public/private cooperation in cyberspace. As described in a recent post, we have yet to attempt the unprecedented balancing act between the kind of public commons/private community that a Facebook truly is.

Wrangling With the Facebook Problem

With Mark Zuckerberg set to testify today on Capitol Hill, and revelations last week that the Cambridge Analytica data breach is now estimated to have affected nearly 90 million users (up from around 50 million), there seems to be no shortage of theories as to how to solve the “Facebook problem.” Congress will ask Zuckerberg what Facebook’s leadership knew about the abuse of its data, when they knew it, and what the company plans to do going forward to protect consumers. Regulatory solutions have their limits, of course, and may even exacerbate a problem if legislators fail to properly grasp the underlying issues. Still, the conversation is long overdue on themes like how much data is being gathered, by which corporations, and for what purposes.

For the past few weeks, Zuckerberg has been contrite, even self-flagellating, which is admittedly a refreshing change from the standard arrogance of Silicon Valley executives. But that’s mostly theater. At best, Congress can do what Congress does, which is to tell Facebook to clean up its act, or they’ll clean it up for them. Meanwhile, there seems to be general consensus among experienced technologists and tech writers that there are limits to how much Facebook can be repaired. “We cannot have regulators trim a beast as if they were barbers and call that change,” writes Jaron Lanier in an editorial advocating the wisdom of those willing to delete Facebook from their lives.

Many responses and proposals to the Facebook fallout have been variations on the theme that the internet (as if it were a conscious being) must return to some idealized, pre-commercial set of values people seem to believe were present 20+ years ago. One might even say that these voices insisting we “make the internet great again” are guilty of a related, ahistorical folly that cannot sensibly answer what “again” quite means in that sentiment. For instance, a recent article by Tim Wu for The New York Times manages to criticize Facebook while alluding to a familiar refrain of cybernetic idealism at the same time. He writes …

“From the day it first sought revenue, Facebook prioritized growth over any other possible goal, maximizing the harvest of data and human attention. Its promises to investors have demanded an ever-improving ability to spy on and manipulate large populations of people. Facebook, at its core, is a surveillance machine, and to expect that to change is misplaced optimism.”

It’s not that I disagree with that description—it’s irrefutable—so much as it is perplexing to imagine that anyone ever believed there might have been another “possible goal” Facebook was going to pursue. While Zuckerberg was still in middle school, I was in meetings where major ad execs were counting on exactly the kind of consumer-specific targeting that was finally made possible when Google and Facebook figured out how to get us to share gigabytes’ worth of personal information without minding. The “surveillance machine” was the golden goose investors were banking on from the moment the internet became publicly accessible.

Wu’s answer to the problem is good old-fashioned competition. Asserting that the web’s tendency to foster monopolies is not a matter of fate, he states …

“…the real challenge is gaining a critical mass of users. Facebook, with its 2.2 billion users, will not disappear, and it has a track record of buying or diminishing its rivals (see Instagram and Foursquare). But as Lyft is proving by stealing market share from Uber, and as Snapchat proved by taking younger audiences from Facebook, ‘network effects’ are not destiny.”

While I think Wu is right to say that a competitor could theoretically do Facebook better (e.g. be better stewards of our data), I remain skeptical that “network effects” are not inevitable when it comes to certain types of platforms. Because if New Facebook came along offering the same features, plus new consumer-protection benefits, we’re all—I mean all of us—migrating to the new platform, leaving Old Facebook to go drink with MySpace.

Of course, with 2.2 billion users and a market cap of nearly $460 billion, the company in the best position to become New Facebook is Facebook. Regardless, most of us only need one time-sucking, data-gathering, cyber-water-cooler in our lives—if we really need any at all—so I still believe certain internet monopolies are inevitable.

If mitigating data abuse by means of competition is what Wu is truly advocating, his references to Lyft and Snapchat seem to sidestep some very tough questions. For instance, a ride-hailing app/service is not remotely comparable to a social media platform. Lyft and Uber are transaction facilitators, and the consumer only benefits by having multiple players compete to provide one service—a ride—on an as-needed basis. That model is simply not analogous to the reasons users spend time and effort contributing all of the content on a social platform.

As for Snapchat, it’s true that my teenager tells me she and her friends are there because “Facebook is for old people,” but at 150 million users, the platform is hardly proof that Facebook’s network effect is not inevitable. Meanwhile, I personally think it’s anybody’s guess what this next generation of users is going to expect or want from social media as they become young adults. This includes growing bored with the whole enterprise and bailing.

Competition is a good thing, but Wu’s generalized appeal to market forces as a response to the “Facebook problem” echoes rhetorical allusions that have been made at various times in the context of copyright enforcement online. For instance in mid-2016, during hearings about the DMCA, critics of any proposal to introduce a “takedown/staydown” provision insisted that the cost of implementation would be so high that it would entrench, for instance, YouTube as the monopolistic social video platform.

This line of reasoning has always lacked integrity because it completely ignores the various market forces, including the network effect, that sustain YouTube’s dominance. I suspect Wu is making a similar error in this case, perhaps oversimplifying the challenge. As just one prosaic example, I am very much drawn to Lanier’s sentiment when he writes, “…those who have had accounts and then deleted them are true pioneers. They will see things and learn things that are new in the world.” Indeed. But if I’m being honest, I discovered his article because a friend shared it on Facebook.