Time to Stop Defending the Internet As-Is

With the passing of John Perry Barlow last week, a number of articles and social media comments by internet activists offered variations on the theme that we have Barlow to thank for the internet as we know it. In general, they mean the internet that has thus far been allowed to function as a self-governing industry. While it is certainly proper that organizations mourn the loss of a friend and colleague—and in the case of the EFF, a co-founder—the story of today's internet runs counter to the 1990s utopianism shaped by people like Barlow. And it’s no longer just us “Luddites” saying so.

Despite the high-minded idealism of twenty years ago—predicting that a cyberspace independent of the laws of nations would somehow reveal our latent morality, fairness, and intelligence—the banal consequence is that laissez-faire cyber policy mostly enabled platform designers to convert our crudest instincts into advertising dollars. This finally became apparent to many Americans last year with the revelation that Russian agents had been using Facebook’s ad system to intentionally exacerbate political discord in the United States.

That story was followed by some of Silicon Valley’s most prominent figures coming forward to acknowledge that the apps and systems they helped build are indeed unhealthy—politically, socially, and physiologically. In short, the internet we have is not the one we should hope to keep. And even the owners of the major platforms are now coming to grips with the fact that, despite all previous slogans to the contrary, they are not making the world a better place. In this light, it seems that organizations like the EFF, which devote considerable effort to defending zero accountability for service providers, are planting their flags on the wrong side of history.

Most recently, a team of tech industry professionals established the Center for Humane Technology. Led by former Google Design Ethicist Tristan Harris and featuring entrepreneur and early Facebook advisor Roger McNamee, the organization’s mission reads like a response to a core problem identified by Jaron Lanier as early as 2010—that web technology was not designed to serve humans so much as humans have been reprogrammed to serve technology.

The new Center emphasizes specific ways in which social media platforms have been purposely designed to exploit basic psychological vulnerabilities in order to make the experiences addictive and retain round-the-clock user attention. They identify both specific design elements (e.g., icons and colors) and business decisions that produce negative effects ranging from personal depression to a political climate steeped in ignorance and outrage. And of course nobody really knows what effects the devices and apps have on the developing brains of young children.

In press interviews, Harris states that many of his friends and colleagues at Google and Facebook are aware of the problems he is now publicizing; but he also notes that the people at these companies are trapped between their personal values, which urge them to act ethically, and company business models, which cannot easily be abandoned. This is why a key target audience of the Center’s outreach comprises the current and future employees of the tech industry, recognizing that talented designers, engineers, and programmers are in a position to exert pressure on employers to build products that actually serve society.

The fallout from the Russian hack, generally referred to as the “tech backlash,” seems to have produced three main responses. The quietest was a fleeting denial that the backlash was even happening or that Big Tech deserved the criticism. A slightly more prominent theme suggested that Silicon Valley had sold out its Barlowian ideals for profit, stressing a back-to-basics agenda that fails to acknowledge the flaws in those ideals at their foundation. And the third response, the one now dominating mainstream reporting, is the one in which leading industry players unequivocally admit that they designed systems which produce some very negative effects.

While the Center for Humane Technology represents a step in the right direction, two ideas occur to me. The first is that more thoughtful technology design alone cannot do our work for us. For example, if our social or political views are too easily manipulated or reinforced by memes or unreliable information sources, Facebook cannot be counted on to redesign its platform to teach people how to be critical thinkers. So, while it is good to know that tech leaders are willing to admit that Facebook appeals to our lizard brains, I suspect they can only do so much to help us transcend our lizard natures. That’s on us.

The other thought is that organizations like this Center might want to reach out to the artists, musicians, authors, etc. who were among the first to identify and discuss the negative effects of the internet. The 20-year-old mud fight over copyright infringement has been cynically mischaracterized as a battle exclusively about money. But this distillation ignores, for instance, the intersection between media exploitation and media gluttony. While it is absolutely necessary to better understand—and even want to change—the mechanisms by which social media platforms foster addiction, it is also worth acknowledging the extent to which those mechanisms still rely on unlicensed exploitation of authors, artists, journalists, musicians, photographers, etc. to retain user attention.

Meanwhile, the organizations still clinging to the maximalist view that society is best served when Google, Facebook, et al are absolved from responsibility and liability will find that message increasingly hard to sell. And it’s about time.

In a must-read article by Roger McNamee, one detail I find particularly striking is how recent the “tech backlash” really is—how much evidence McNamee himself, as a trusted advisor, had to present to Facebook leadership before they finally stopped hiding behind the old saw that they’re not a publisher and cannot be held responsible for third-party content. That mantra is the colloquial version of what the policy folks know as Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act—twin liability shields that are the legislative foundation for many of the problems now being discussed. In fact, I’ll conclude by quoting McNamee, who sums it up perfectly as follows:

“Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.”

© 2018, David Newhoff. All rights reserved.
