Is a Revenge Porn Bill Next?

When nude photos of celebrities were leaked and distributed all over the internet in 2014, Jennifer Lawrence, as one of the victims, called it a “sex crime.” Meanwhile, the idea that the platforms themselves bore much responsibility to remove the images was met with mixed responses. The leadership at Reddit was so high on the fumes of its own utopian bullshit that they compared governance of the site to that of a democratic nation which should not impose moral choices on its citizens. Into that bro-publica climate, Representative Jackie Speier (D-CA) introduced a bill in July 2016 that would make “revenge porn” a federal crime. The usual defenders of the web raised the predictable red flags, asserting that even a well-intended bill of this nature would lead to over-censorship online. Then, little was heard about this proposal, except perhaps inside the Beltway.

But suddenly, the landscape is very different, and I would not be surprised if we see movement on some type of “revenge porn” bill in 2018. In light of the head-spinning litany of sexual-assault allegations in the news, the general dilution of Silicon Valley’s political clout, and what seems like the inevitable passage of the SESTA bill, Rep. Speier’s bill might make relatively smooth progress toward enactment next year. If nothing else, it’s easy to imagine Congress passing this kind of legislation in a scramble to get on the right side of the historic shift we’re witnessing with regard to sexual harassment in every context.

Meanwhile, you might have missed the news that Facebook recently proposed an internal “solution” to combat revenge porn, which was appropriately scorned, if not outright mocked, because it requires trusting their “trained” team with your intimate photos so they can protect you. These are the same guys who couldn’t do the math on Russians buying American political ads with rubles. Activist and author Violet Blue wrote a great piece for Engadget describing why Facebook’s counter-revenge-porn proposal is not wearing any clothes. “The process presumes the victim has these photos in the first place, and cavalierly ignores that this person is living in a nightmarish hellscape trauma that is in no way re-experienced by handing the instrument of their terror to an anonymous, unaccountable, possibly grey alien Facebook employee,” she writes.

The Speier Bill

It’s actually a misnomer to call H.R. 5896 a “revenge porn” bill because revenge porn is a specific act, usually perpetrated by angry ex-boyfriends who get back at women who’ve broken up with them by distributing nude or sexually-explicit imagery they might have made together as a couple. Speier’s bill, titled the “Intimate Privacy Protection Act,” bypasses the issue of motive altogether and merely states that anyone who distributes intimate images—the language defines these explicitly—of adults with “reckless disregard for the lack of consent” of the subject could potentially face federal charges.

Often, the harm does not end with mere embarrassment. Instead, the images may serve as the predicate for a sustained, emotional assault by a male cyber-mob hounding a female victim, labeling her a “slut,” “bitch,” “whore,” and so on. Sites that trade in unauthorized intimate images may extort payments from victims for removal of their images, but there is little to stop the images from migrating virally once online. As such, remedies for removal are nearly impossible, and any effort on the part of the victim to extricate herself from the “hellscape,” as Violet Blue puts it, is more likely to exacerbate the emotional trauma than to ameliorate it.

As an aside, yes, every kind of sex education in the world ought to include a segment on the hazards of making intimate images with networked devices. It’s hard to believe that anyone is still naive enough to think that images created on smartphones and the like can be kept private without substantial risk. But that kind of personal awareness does not preclude criminalizing the decision by an individual or entity to distribute these images without permission.

Thirty-eight states plus the District of Columbia have some type of law criminalizing “revenge porn,” but given the geographical irrelevance of internet distribution, it seems only reasonable to proscribe the conduct as part of the federal criminal code. Assuming this bill does see any action in 2018, we can expect the usual suspects—EFF, Public Knowledge, Techdirt, et al—to cry havoc and declare once again the danger that such proposals pose to free speech on the internet.

Whether this chorus will be joined by major platforms like Google, Facebook, and Twitter may not be as predictable as it would have been just a year ago. I suspect these companies are all recalibrating how to spend their political capital now that public sentiment is less inclined to give them carte blanche; and distributing intimate images without permission is not a “cause” most people are going to support. Regardless, when it comes to the various harms that can be caused via cyberspace, it seems the public is catching on to two realities: 1) that an internet policy doctrine based on the natural goodness of people is utter folly; and 2) the tech companies are in way over their heads.

Sucking Faster:  Is the Tech Backlash Happening or Not?

At the launch of this blog in the Summer of 2012, in the intro to a podcast interview with journalist Christopher Dickey, I cited a print ad from the 1990s for a video post-production facility. In the center of the ad was an old vacuum cleaner, and the headline read:  Without the right talent, high technology just helps bad creative suck faster.  It was a damn good ad that continues to resonate with me in considering the many challenges imposed by the effects of the digital age.  Especially its effects on democratic principles.

While everybody was being self-congratulatory over the “direct-democracy-in-action” defeat of SOPA in 2012, I argued that this alleged triumph was folly in disguise.  The same tools and methodologies employed to deceive the public in that campaign, I asserted, could easily be used by another manipulator to more deleterious effect on the true foundations of American democracy. I even suggested that Citizens United was child’s play compared to what that campaign had revealed was possible.  Then in September, Mark Zuckerberg had to issue a mea culpa in response to mounting evidence that Russian agents used Facebook to manipulate the American political process. “We will work to do better,” he said.  Though I wonder if they can.

In the wake of the 2016 election and the sudden discovery of fake news, the mainstream media finally showed up and started criticizing Big Tech.  Most recent examples include articles like “Silicon Valley is Not Your Friend” by Noam Cohen for The New York Times and “Is the Sun Really Setting on Silicon Valley?” by Maya Kosoff for Vanity Fair.  On a range of subjects, from foreign infiltration to advertising integrity to anti-trust and even the election of Donald Trump, numerous reporters have written about, or contributed to, what is broadly being called the “tech backlash” against the industry.

While there may be specific areas in which these platforms are being made out as scapegoats—and it’s me saying that—it is at least encouraging to hope that the public may finally begin to see these corporations as entities deserving scrutiny like any other industrial giant—rather than as the front-line defenders of democracy itself.  Just a couple years ago, no matter what underlying issue warranted criticism—copyright infringement, harassment, privacy invasions, etc.—the response was generally the same:  that these platforms are too essential for democratic progress (namely, free speech) to mess with.

I’ve always thought this premise was utter bullshit and still do, but much as I broadly disagree with this defense of Silicon Valley by Matt Rosoff for CNBC.com, there’s an instructive element of truth in one thing he says.  Rosoff points to polling data, which allegedly reveal that Facebook, Google, Amazon, et al still enjoy high favorability with the American public (i.e. that there is no tech backlash).  These data may be accurate, but they do not justify Rosoff’s generalization that “These companies’ products have helped society more than hurting it.”

Here, Rosoff addresses the topic of fake news, about which he writes, “Facebook was built to make the spread of ideas as frictionless as possible. If those ideas are angry, polarizing, ill-informed, ignorant (call them whatever you want) it reflects the people who are spreading them, not the platform on which they’re spread.” In other words, social media doesn’t make people idiots, people make themselves idiots.  True. But not entirely.

Rosoff is wrong to suggest that a platform like Facebook is an extension of the norm—that the medium itself does not substantially alter the manner in which we consume and relate to news, information, politics, and one another. And just because Mark Zuckerberg did not set out to upend the role of quality journalism in American democracy—he never intended to become a “news” site—that does not mean his platform has not had this effect on society. So, I would make a distinction between blaming Facebook per se and recognizing any number of its unintended consequences.

It is certainly well past time for the major platform owners to put away their childish mantras of disruption and behave like citizens.  They’ve moved fast enough and broken enough things.  Ironically, though, this vogue drumbeat of anti-tech reporting seems overly focused on areas where the major platforms are actually justified in saying there’s only so much they can do.  For instance, Cohen writes, “Facebook has endured a drip, drip of revelations concerning Russian operatives who used its platform to influence the 2016 presidential election by stirring up racist anger. Google had a similar role in carrying targeted, inflammatory messages during the election ….”

Recognizing that social media can be a centrifugal force is a step in the right direction, but we also cannot blame Facebook, or the Russians, for racism in America. That’s a cop out.  At best, we can recognize the ways in which these technologies help racism and other forms of hate and ignorance suck faster.  At the same time, it is insufficient to end the discussion, as Rosoff seems prepared to do, with the tired cliché that these platforms are neutral—that they’re only as good or bad as the people using them.

It isn’t quite that simple because in one way or another we’re all idiots—all ignorant about something and all capable of bias and anger, and certainly not all skilled in expressing ourselves through writing.  So, if the axiom that the medium is the message remains true, it should be little wonder that Facebook or Twitter is always one comment away from making enemies out of neighbors.  Then add the bots, the trolls, the manipulators, criminals, and the bona fide haters, and of course these platforms are the ideal fora for undermining the principles on which a democracy like ours is founded.

This doesn’t mean we should necessarily turn away from these platforms.  They can fulfill promises like connecting people and providing peer curation of useful news and information.  But it does mean that a new, digital-age literacy is required—one that remains vigilant to the manipulative nature of these platforms and, yes, one which holds the platform owners responsible to the extent that this is possible and effective.  To achieve that, however, requires taking them down off the pedestals of innovation and freedom and treating them like what they are—businesses.

In this regard, the concern should be that while the press has a good time plucking low-hanging fruit (like this story about Google serving fake news to fact-checking sites), the larger policy narrative may remain unchanged—the one which has thus far insisted that internet companies can and should operate outside the normal boundaries of law.  Whether the issue is online support of human trafficking, counterfeiting, fake news, harassment, revenge porn, or mass copyright infringement, the major internet companies continue to insist that their statutory liability shields (written when Zuckerberg was a pre-teen) are essential to our enjoyment of the many benefits their platforms provide.

What each of these individual stories in the “tech-backlash” narrative adds up to, though, is the observable truth that these platforms yield plenty of results that are quite hazardous—even to the democratic values they claim to foster.  And as the rule of logic goes, if a premise is false, the conclusion doesn’t follow.  The premise that these platforms produce a net positive for democracy is, so far, proving to be false. Therefore, the conclusion that they must remain eternally shielded from legal liability and social responsibility does not follow.

Fake News Tops Results After Las Vegas Shooting

On Monday, I was up early and first heard about the Las Vegas shooting on the radio in the car. It was still dark, and the winding road thick with fog, lending an eerie mood to the sound of Scott Simon’s voice on NPR reporting what little was known about this latest incident in what is now an epidemic of mass-killings. I had yet to look at any social media, to read anyone else’s opinion or to have the raw facts of the tragedy synthesized through the narrative of gun control, mental illness, terrorism, or any other matter of public policy. There was just the horrible truth of what had happened without theory or explanation. This is how we used to digest the news: Here’s what we know so far. Stay tuned.

Social media abhors a vacuum. And in the hazy interval between breaking reports of an event like the Las Vegas spree-shooting and the revelation of salient, credible details, the pranksters, trolls, and professional liars come out to play. Brianna Provenzano, writing for Mic.com, states that for several hours, “Facebook and Google’s algorithms prioritized fake news” about the Las Vegas shooting. As she puts it, “conservative conspiracy sites like the Gateway Pundit lit up with misinformation about the shooter’s identity.” Her article shows one example of a headline naming some poor guy who had nothing to do with the shooting, calling him a “Democrat Who Likes Rachel Maddow, MoveOn.org, and Associated with Anti-Trump Army.”

According to Provenzano, the Gateway Pundit story was among the top results on Facebook before it was removed; but once the innocent man’s name was out there, Google searches for it led readers to a 4Chan thread “labeling him a dangerous leftist.” She also reports that Google eventually made algorithmic adjustments to replace the 4Chan story with relevant results and stated it will continue to be vigilant in this regard.

It’s right that Google and Facebook took action to quash, or at least mitigate, misleading “news” about such a gravely serious incident, especially bogus reports naming an innocent man as the perpetrator. But for those of us regularly following the policy positions of the internet industry, the hypocrisy here is not lost. For instance, Google can clearly take remediating steps where failing to do so would look bad for them; but in other contexts in which search results may facilitate harm, they will expound ad nauseam upon the sanctity of free speech as a universal rationale to leave all data exactly where it is.

For instance, regarding the Equustek case and the Canadian court order to remove links, I fail to see a substantive distinction, in a speech context, between a counterfeiter using search to hijack customers from a legitimate product-maker and a counterfeit news-maker using search to hijack readers from legitimate reporting. In fact, ironically enough, a bogus news story, harmful and revolting as it may be in the wake of a tragedy like Las Vegas, has a better claim to speech rights than a hyperlink which leads consumers to a product or service that is breaking the law.

So, it’s not that I think Google et al shouldn’t make decisions to remove or demote “news” emanating from the adolescent babooneries of places like 4Chan. They absolutely should. Fake news is toxic, and we have enough problems with grim reality without people inventing and believing bogus narratives. But as I’ve argued more times than I can count, speech cannot be the default rationale for a universal laissez-faire policy in cyberspace. And as this story demonstrates, it’s a lie anyway. The major web platforms can and will manipulate, delete, or demote content, or links to content, when they are motivated to do so. Whether these internal decisions are driven by revenue, public relations, or even altruism, speech-maximalism does not seem to factor into their thinking, so there’s no reason why it should necessarily factor into external motivations like a court order.

Meanwhile, we can’t expect Google and Facebook to stop people from being idiots. Readers may remember that after the Boston Marathon bombing in 2013, netizens took it upon themselves to play law enforcement. Not only did they vilify an innocent man whose whereabouts were unknown, but the cyber-mob soon harassed the man’s family, who would then discover that the young man was missing because he had committed suicide.

In the early days of Web 1.0, I rejected the old cliché Don’t believe anything you read on the internet because, of course, the internet really was just a conduit, and a credible source is a credible source. But now there is such a bounty of absolute garbage, whether designed to look legit or algorithmically elevated to undeserved prominence, that I think skepticism should be the default approach to nearly every headline. So far, the “information revolution” is at least half oxymoronic. And part of the problem is that it can be very hard to know which half.