Fool me once, shame on Facebook …

In several posts on the subject of Facebook and fake news, I have opined that if we users are going to believe and disseminate bogus information, that’s mostly an us problem, one which Facebook likely cannot solve. In that spirit, there is an extent to which I agree with Mike Masnick’s Techdirt post on May 2 calling Facebook’s plans to rank news sources according to trustworthiness a “bad idea.” At least I agree with Masnick that a human flaw like confirmation bias is a “hell of a drug,” which cannot be counteracted by whatever algorithmic wizardry Zuckerberg & Team may devise.

But other than conceding that people are imperfect, subjective beings, and therefore susceptible to false information, I disagree with the rationale Masnick seems to apply in his critique of Facebook’s plans. He writes, “…as with the lack of an objective definition of ‘bad,’ you’ve got the same problem with ‘trust.’ For example, I sure don’t trust ‘the system’ that Zuckerberg mentions…to do a particularly good job of determining which news sources are trustworthy.”

Perhaps that’s just wordplay, but I find Masnick’s allusion to the subjectivity of trust to be symptomatic of the same populist affliction that precipitated the post-truth world in which we now live. I had hoped that electing a president who openly lies on Twitter might at least serve as a clear and profound rebuttal to the cyber-utopian mantra that everything—including journalism—needed disrupting. Because if trustworthiness in news is not, on some level, objectively quantifiable, then all journalism must devolve to the exigencies of confirmation bias.

A functioning and humane democratic society depends on limits to democracy itself—on deference to expertise, and on objective criteria for deciding when that deference has been earned. It is essential that a reporter write, This Thing Happened—or even Here’s Why This Matters—and that a plurality of reasonable people accept the report as reliable based on objective (if subtle) metrics. Years of experience, background, track record, tone and style, and, yes, the organization a reporter works for should all factor into this assessment. So, I reject the notion that “trust” is anywhere near as subjective as “bad” in this context. The integrity of a news report is not a matter of taste. Yet, Masnick writes …

“Facebook should never be the arbiter of truth, no matter how much people push it to be. Instead, it can and should be providing tools for its users to have more control. Let them create better filters. Let them apply their own ‘trust’ metrics, or share trust metrics that others create.”

Call me a curmudgeon, but how is “applying one’s own trust metrics” any different from the same confirmation bias problem that social media tends to exacerbate in the first place? Masnick’s solution appears to be more confirmation bias, resembling the cliché that insists “more speech is the only solution to bad speech.” If that premise were ever true (and I have my doubts), it has been obliterated by the phenomenon of social media, where more is often the enemy of reason.

Masnick is right, of course, that users who like Infowars are going to respond negatively if Facebook ranks that platform as less trustworthy than The New York Times or Wall Street Journal; but that’s a business problem for Facebook—one I couldn’t care less about because Infowars IS objectively less trustworthy than those news sources. And lest anyone think that’s liberal bias talking, I’ll say the same thing about Occupy Democrats or any of the other non-news sources my friends link to all the time.

These platforms don’t deserve equal footing with actual journalism, and if Facebook wants to rank news sources, fine. Whatever. I’m probably as skeptical as Masnick that it will do much good in the grand scheme of public discourse, but I think he exaggerates when he calls Facebook an “arbiter of truth.” This sounds less like a complaint about what Facebook is doing wrong in grappling with its role as a conduit of news and more like the voice of a blogger who opposes platform responsibility, full stop. In fact, it’s hard to fathom exactly what Masnick proposes as a solution when he writes, “The answer isn’t to force Facebook to police all bad stuff, it should be to move back towards a system where information is more distributed, and we’re not pressured into certain content because that same Facebook thinks it will lead to the most ‘engagement.’”

That reads like a suggestion that Facebook should not be Facebook, which is probably a non-starter as far as the shareholders are concerned. Instead, I tend to think that Facebook should be recognized for the flawed, highly manipulated walled garden it is and placed in its proper context—as an activity to be moderated like video gaming or junk food. Because with or without rankings, we really have no idea what the psychological effect is of scrolling past images and headlines that trigger dozens of subconscious emotional responses in a matter of minutes. Meanwhile, to the extent that Facebook remains a source of news and information, if ranking means I’ll encounter The Daily Beast more often than The Daily Democrat, I’ll count that as a win.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
