Techdirt Dissembles on FOSTA in Rebuke of Kristof

In a recent post on Techdirt, Mike Masnick calls columnist Nicholas Kristof a hypocrite based on a narrative Masnick just plain made up.  On December 12, Kristof published a brief column in The New York Times with a picture of a 12-year-old girl who is starving to death as a victim of the U.S.-backed Saudi war in Yemen.  The girl is naked but for a diaper and a bandage on her foot, and the image of her skeletal, wasting body is truly humbling, which is why Kristof says he devoted so much of the page to the image itself.

After the story was posted on Facebook, the social platform apparently kept deleting the photograph, which prompted the following tweet from Kristof on December 16:

Facebook seems to have repeatedly blocked the photo of Abrar that went with my column:  Come on, Facebook! If you want to end these horrifying images of starving children in Yemen, then help end the U.S.-backed Saudi war that causes the starvation.

Kristof’s complaint was then seized upon by Masnick, who concocted a typically sarcastic “gotcha” on the premise that because Kristof backed the counter-sex-trafficking legislation known as FOSTA (Fight Online Sex Trafficking Act), he has no right to “whine” about Facebook removing this photo for its “sexual content.”  True to form, Masnick smugly alleges that Kristof knew nothing about how FOSTA worked, even as Masnick himself grotesquely misrepresents the law, as well as the nature of Facebook moderation, in his post.

Without even getting into FOSTA, anyone who has been on Facebook for the past decade or so knows that the platform has often removed images—even fine art—that some moderator believed violated its “community standards.”  Facebook has been making these often laughable mistakes since long before anyone introduced the legislation that became FOSTA, which passed into law in April of this year.  In fact, Masnick’s recent post cites one of his other posts from 2016 criticizing Facebook for censoring the iconic, Pulitzer Prize-winning photo of the naked Vietnamese girl running from a napalm strike.

Notably, the removal of that famous photograph was actually mentioned in the documentary The Cleaners, which I wrote about in November, and which profiles the Philippines-based moderators to whom Facebook has outsourced most, if not all, of its “community standards” oversight.  The documentary reveals a mélange of human fallibility in the decision-making behind content moderation, and Kristof’s photo might have been repeatedly removed for being “disturbing” rather than “sexual.”

Regardless, the broader point is that millions of images a day are processed by these young moderators, who are required to meet quotas and whose culture is not grounded in American principles of speech, press, etc.  It is almost impossible to generalize about their motivations and judgment calls.

At the same time, even if, in the most depraved imagination, someone could identify Kristof’s photo of this poor child as “sexual,” it would simply violate child pornography laws, which predate FOSTA, predate Facebook, and even predate the birth of Mark Zuckerberg.  Yet, somehow, The New York Times published the image, which nobody seems to have confused with pornographic exploitation.  All of which is to say that neither the Facebook moderation regime nor Kristof’s specific complaint about the photo, which no sane person could mistake for “sexual,” has anything to do with FOSTA.

As explained in several posts, what FOSTA does is affirm that no internet service provider is automatically immunized against criminal or civil allegations of contributing to sex-trafficking.  FOSTA does not mean that a plaintiff who brings a claim has any less burden to prove a platform’s culpability in that crime.  (Y’know, the way the law works.)  In fact, all one needs to do is look at the volume and nature of the evidence gathered against Backpage to see that proving a contributory role in sex-trafficking takes a hell of a lot more than hosting some “nudity.”

While it is possible that, in an abundance of caution after passage of FOSTA, attorneys at Facebook recommend simply removing anything that can even remotely be deemed “sexual,” it is also evident that the platform was generally doing this long before FOSTA.  Next, the platform will, and should, remove material that is patently child pornography.  And finally, the attorneys at Facebook are well aware that hosting content which may be used as evidence of “contributing to sex-trafficking” is a distinct and high bar for a would-be plaintiff to meet.

So, it is a leap and a half to allege that platforms are now over-censoring as a result of FOSTA, to say nothing of the current reality that Facebook has way bigger content moderation problems right now.  In this regard, I think the folks at Techdirt, and everyone else, ought to be more concerned that Facebook cannot seem to distinguish between a third party like The New York Times and just some other account holder.

It ought to be a simple enough internal practice to determine that if a mainstream news company—which is also not immunized against allegations of illegal conduct—can publish an image without legal jeopardy, then Facebook can safely host the same image.  Why this does not appear to be the case has everything to do with the platform’s overall management and nothing to do with FOSTA.

I’ll leave it to the judgment of the reader to consider Masnick’s labeling Kristof as having a “savior complex” for his interest in starving children and trafficking victims.  But given the choice between a guy who wants to save kids and a guy who wants to save legal liability shields for mega-corporations, well, let’s just say Mike may not make the Nice list this Christmas.

Really, DON’T Believe Anything You See on the Internet

When that cliché first entered our consciousness, it wasn’t really fair. The internet between the mid-90s and the mid-aughts wasn’t what it is today. It actually was just a dumb pipe through which content could be delivered from creator to consumer in a new way. It was silly to imply that one should not believe a news story published by the Washington Post just because it was on a screen instead of paper — and that principle still holds true for most professional journalism.

But now, every legitimate news source swims in the same stream with all the garbage—from raw clickbait to lazy aggregators to hackers purposely trying to exploit underlying divisions in democracies—and the tools of manipulation are so sophisticated that many of the manipulators themselves don’t have to be. With a little practice using software that anybody can steal, a kid can create a video that makes it look like Hillary Clinton said that “all veterans are pussies,” and…well, here we are.

“One of the things I did not understand was that these systems can be used to manipulate public opinion in ways that are quite inconsistent with what we think of as democracy.”

That’s what Alphabet (Google parent company) Executive Chairman Eric Schmidt said, recently quoted in an article on FastCompany. And in keeping with the theme of this post, I don’t know what to believe. Were Schmidt and the rest of the leadership at Google honestly so drunk on their own utopian rhetoric about how wonderful their systems are that they failed to imagine—to say nothing of observe—how their products could be toxic for democracy? Or did they recognize it and not care until they were forced to care amid the fallout from the investigations into Russian meddling?

Facebook’s founding president Sean Parker—he was also the co-founder of Napster—told Mike Allen of AXIOS in a recent interview that Facebook was designed to “exploit a vulnerability in human psychology” in order to keep people on the site as much as possible. Parker told Allen that the creators of Facebook understood what they were doing and did it anyway, though perhaps did not quite imagine what the results would be when a billion people voluntarily spend hours in Zuckerberg’s ant farm. “…it literally changes your relationship with society, with each other … It probably interferes with productivity in weird ways. God only knows what it’s doing to our children’s brains.”

How much has changed in such a very short time. It seems like only yesterday the cheerleaders of Silicon Valley, with all the confidence of Camel-smoking doctors, kept telling us just how good their products were for democracy and for society overall. All this goodness was packaged into a single generic word, innovation, and anything that stood in the way of innovation—like maybe the rule of law—was bad. Now, all of a sudden, we hear a lot of “Wow, we had no idea our systems could be used to totally fuck up the world! We’ll get some people on that right away!”

Of course, these companies either will not or cannot fully address the underlying reasons why their systems can be so toxic; and Parker put his finger on it when he admitted that Facebook was designed to take advantage of human folly. Facebook may clean up its act in certain regards—I actually believe Zuckerberg wants to—and Congress may enforce some effective regulations upon these platforms; but none of this will address the flaw in human nature that makes us more susceptible to emotional triggers than we are to reason and information. That’s why the underlying promise of the information age—that information can only have a moderating effect on discourse and interaction—is proving to be untrue.

There’s something fundamentally harmful about taking complex topics and issues and transforming it all into advertising, but that’s essentially what a platform like Facebook or Twitter does. “The sad truth is that Facebook and Alphabet have behaved irresponsibly in the pursuit of massive profits,” writes Roger McNamee for The Guardian. “They have consciously combined persuasive techniques developed by propagandists and the gambling industry with technology in ways that threaten public health and democracy. The issue, however, is not social networking or search. It is advertising business models.”

McNamee, who is identified as an early investor in Google and Facebook, describes how the advertising revenue models of these platforms drive, for instance, Facebook to deliver content based on user preferences, creating feedback loops called “filter bubbles.” People have been writing about the filter-bubble problem for several years now, but I suspect the problem is far too subtle to expect that the platforms themselves, with or without legislative mandates, will solve it.

Amid the recent flurry of allegations of sexual assault, satirical posts have appeared on Facebook with photos of Tom Hanks and leads saying, “Dozens of women come forward to…” And then, you click on the story, and it completes, “…say that Tom Hanks is a really nice guy.” Variations on this gag appear all the time, like the reports that Keith Richards is still alive. But you can bet the beer money that any number of people just scrolling through a feed on their phone, perhaps waiting in the supermarket line right next to the old-school tabloids, will come away with the impression that indeed Tom Hanks was implicated in some sexual abuse claim. Then, the rumor gets repeated to a friend, and that’s more or less the state of “information” in the digital age. It’s the National Enquirer at “Google scale.”

According to David Roberts, writing for Vox, America is in the middle of an epistemic crisis: many citizens are beyond the problem of separating fact from fiction and are instead living in a world in which facts simply don’t matter. It is a mindset he calls “tribal epistemology—the systemic conflation of what is true with what is good for the tribe.”

For the time being, analysis of the online media universe reveals this problem is more prevalent on the political right (see support of Roy Moore even if he did assault a teenager), but the political left is hardly immune to this kind of tribalism. In fact, this blog was inspired five years ago when I witnessed this exact behavior among left-leaning friends, who were willing to share false information because it supported the outcome they believed to be right. So, although it is somewhat encouraging that this year marks the turning point when internet platforms will no longer be given a free pass — either by lawmakers or the public — to simply do what they want “for the greater good,” that hardly addresses how we individually and collectively will learn to cope with “God knows what’s happening to our brains,” as Parker puts it.

Information Collapse

Naturally, I check the stats on this blog and am always curious to see referring sites and related comments. But this week, I’m reminded what an information clusterfuck the Internet can be with the discovery that a post I wrote about malicious editing on Wikipedia is now a cited footnote on a page at its backwoods, idiot cousin known as Conservapedia.  The page itself was about liberal bias on Wikipedia, and I couldn’t imagine anything I’d written that would support this kind of criticism; but there it was — footnote number 18 right below the heading blaming Wikipedia for, of all things, Anti-Christianity.  And the fact that they’re citing the words of a confirmed atheist is the lesser irony in this case.

Apparently, the rogue Wiki editor, Robert Clark Young, outed by Andrew Leonard at Salon.com and referenced in my post, included among his furtive revisions the “cleaning up” of articles that contained any positive reflections of paganism.  As such, Conservapedia’s beef is not that Young was a mischievous butcher of articles in general, but that Wikipedia ultimately banned him at least in part for doing his Christian duty by scouring pro-pagan language. Neither my article nor Leonard’s makes mention of this particular angle related to Young’s antics; and if the pagan story is true, it’s the first I’ve heard of it.  Granted, the Conservapedia article doesn’t reference anything pertaining to the thesis of my post, only that it substantiates Young’s having been banned; although this is a paradoxical citation in support of an otherwise mundane fact.

Titled Montag’s Grin, my piece (should any wayward Christian zealot wander over here) contemplates the potential we have to digitally “burn” the books through round-the-clock, unchecked, amateur revisionism rather than with fire. So, the real irony in this case is that Conservapedia is probably one of the best examples of exactly what I meant when I wrote the post — that the tools and collectivist ideals behind the founding of Wikipedia do not necessarily produce better information and a smarter world. It was inevitable that an ultra-conservative, funhouse-mirror version of Wikipedia would come to be, just as FOX News was inevitable the moment Ted Turner set out to prove that 24-hour news could be a business.  And the Internet has only exploded and accelerated the folly 24-hour news set in motion, such that news is now even further segmented according to bias, has to provoke or entertain just to attract fleeting attention, and demands a rate of production that can only degrade the practice of investigative journalism.

On a related subject, I was interested to see that Popular Science recently removed comment threads from its website after concluding a study that indicated comments are actually bad for the advancement of science. This makes sense. Because we have elevated and monetized even the most base forms of discourse, we have consequently fostered an environment in which, under the guise of fairness, we continue debating on a national scale even settled sciences like Darwinian evolution. Quote that Conservapedia!

But with regard to any form of criticism of Web 2.0, those of us doing the criticizing are often accused of being anti-technology or anti-future, of trying to stuff the innovation genie back into his proverbial bottle. This isn’t a rhetorical tactic employed solely by common trolls, but also by the corporate leaders of the technological empire.  In any given debate about the application of digital technologies, the vested interests tend to sow a kind of fear by setting up the false choice of tech vs. no tech. If we’re paying attention, this is functionally the same as “If we don’t play by their rules, they’ll take their ball and go home.” And we keep falling for it.  To quote Andy Borowitz in response to yesterday’s government shutdown, “If the Internet had been shut down, there would be rioting in the streets.”

Instead of this artificially binary, repetitive, and generalized defense of technology, we might all agree that the tools of innovation are also tools of exploitation; that connecting can include stalking and bullying; that the companion of crowd-sourced is mob-ruled. Then maybe we can have an adult dialogue about how, when, and why we use these tools — and quite possibly assert the right to have some say as to how they evolve. Because, if the web is living up to its 20-year-old promise to make us a better-informed hive, I have to wonder what this liberal’s words are doing linked to anything on Conservapedia.

Tech utopians herald the technological singularity as a messianic event — a time when man and machine will coexist in a world just beyond our imagination, where super-intelligence is the norm and immortality will be achieved.  It is possible, however, that these very same tools could actually cause the volume of bullshit out there to keep expanding, acquire tremendous mass, and then collapse into a singularity as it is often defined in space-time — a point of infinite density from which not even light can escape.