12 Things You Could Still Do If SOPA Had Passed

Photo by tomasmikula

Because today is the five-year anniversary of “Blackout Day,” the day millions of users were suckered into doing the internet industry’s bidding for no good reason, the always-relevant BuzzFeed offers us a missive published by the ReCreate Coalition called “12 Things You Can Do Because Congress Protected Internet Freedoms,” by which they mean Congress backed off the passage of SOPA/PIPA on January 18, 2012.

But there’s something magical about the twelve things listed; it’s kind of like a palindrome in that it is also a list of things you would still be able to do if SOPA/PIPA had passed. Let’s not equivocate on this matter.  I mean not one of the activities mentioned was in any way threatened by SOPA/PIPA.  And you know how we know this? Because those bills didn’t expand rights or restrict exceptions like fair use under the copyright law. If you engage in any or all of the listed activities and actually infringe a copyright, you’re just as potentially liable right now as you would be if those bills had passed. For reasons known perhaps only to the folks at ReCreate, they chose the following:

1. Share puppy videos instantly to Facebook.

2. Post a breaking news clip on Twitter.

3. Review a new restaurant on Yelp.

4. Comment on an article at a news outlet like Deadspin.

5. Use Wikipedia for that history paper on Alexander Hamilton…

6. Post a funny meme to Reddit.

7. Save a healthy recipe on Pinterest…

8. Repost a motivational quote on Instagram.

9. View and share family photos on Flickr.

10. Write a political opinion blog on WordPress.

11. Post a mannequin challenge on YouTube.

12. Listen to a podcast on SoundCloud.

None of these actions inherently requires the use of copyrighted works. With some, it’s actually hard to fathom how such a use is even implied. For instance, it’s pretty tough to share your own family photos and infringe a copyright, which suggests the ReCreate folks really put their A-Team on this little project. But don’t kid yourself. If your political opinion blog includes the publication of a copyrighted photograph used without permission, you’re exactly as liable right now as you were before anyone had even heard of the acronym SOPA. Those bills were aimed at foreign-based, enterprise-scale pirate site operators and required substantial, costly evidence to enforce. It would not have been legally possible for rights holders, under SOPA, to give any more of a damn about private videos and restaurant reviews than they do right now.

The remedies provided by SOPA/PIPA were based on existing practices already used by courts when providing injunctive relief—all of which have been applied in various cases, and all without destroying the internet, the First Amendment, or your ability to “share puppy videos instantly on Facebook.” Since 2012, sites have been shut down, URLs delisted, and credit card services denied to various types of bad actors; and yet the web keeps humming along in all its mannequin-challenging, motivational-quoting, and funny-meme-making glory. The anti-SOPA campaign was one of the most effective fake news stories of all time, and celebrating the anniversary of being fooled is, well…you finish the thought.

I assumed the buzz in BuzzFeed referred to current events, but perhaps it’s a literal reference indicating that any party, no matter how stoned they are, is free to publish any nonsense they cobble together via their platform. So, I guess we should add a thirteenth item to the list that would also, sadly, still be kosher in a world with SOPA & PIPA:

13.  Click-bait bullshit could still pretend to be information.



BMG v Cox Goes to 4th Circuit Appellate Court

Amicus briefs were filed recently in the 4th Circuit Court of Appeals in the case of BMG Rights Management v Cox Communications. In November of 2014, BMG sued Cox (an ISP) for contributory copyright infringement, and a US District Court found for the plaintiff in December of 2015, awarding $25 million in damages. The suit was based on evidence that Cox was willfully ignoring and/or failing to address the use of its service by repeat infringers.

“Digital rights” groups and (let’s be honest) people who support piracy decried the outcome, which Cox has now appealed. A decision may be expected by early spring, and if the court were to find Cox’s arguments persuasive, it would have a very damaging effect on rights holders—further aggravating the weakness of the DMCA as an enforcement mechanism.

Having failed in the lower court to convince either judge or jury that Cox had sufficiently maintained its liability shield (safe harbor) under the DMCA, the company now seeks to argue on appeal that the DMCA says something other than what it says. And true to form, Public Knowledge and the Electronic Frontier Foundation have chimed in (via joint amicus brief) to propose that if the Cox ruling is upheld, it could lead to disenfranchisement of people from internet access and…y’know…destroy free speech. Again.

Anyway, let’s review.

Although one might get the idea from general discussion that DMCA is either a blanket liability shield for ISPs or a blanket takedown mechanism for rights holders, it is neither of these things. Instead, the DMCA statutes define the conditions and responsibilities of service providers with regard to users uploading unlicensed, copyrighted material onto their platforms. In simple terms, the law states how an ISP may conditionally retain its safe harbor liability shield, and it is these conditions which tend to get lost in the broader reporting on DMCA-related stories.

A service provider like Cox, which sells internet access to consumers, generally would not be concerned with hosting infringing material the way a platform like YouTube would be, but it can be liable for contributory infringement if the company is aware of subscribers using its service to repeatedly infringe copyright and takes no action to stop the infringing activity. DMCA §512(i) states that a service provider must have “a policy that provides for the termination in appropriate circumstances of subscribers and account holders of the service provider’s system or network who are repeat infringers.” Note that the DMCA explicitly anticipates conditions under which the provider must eventually terminate the accounts of repeat infringers—logically, those who refuse to stop after some amount of warning.

So, for all of EFF’s and PK’s dramatics about the cruelty of account termination—their brief compares it to “cutting off a tenant’s water”—one might get the idea that in writing the DMCA, Congress never imagined such a remedy; but termination is precisely what the law says.  (A more reasonable comparison would be made to the prospect of losing a driver’s license, which can be quite damaging to an individual, but which is also a penalty imposed only after some degree of willful abuse.)  What the DMCA does not say is how much infringement makes a “repeat infringer,” or how an ISP must design its policy for addressing repeat infringers where account termination is a possible consequence.
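For readers who think in code, here is a minimal, purely hypothetical sketch of the kind of graduated policy §512(i) leaves to each ISP’s discretion. Every specific in it (the notice thresholds, the escalation steps, the names) is an invented assumption for illustration, precisely because the statute defines none of these numbers:

from collections import defaultdict

# Hypothetical sketch only: the DMCA does not define these values.
WARNING_THRESHOLD = 3       # valid notices before a formal warning (assumed)
SUSPENSION_THRESHOLD = 6    # notices before a temporary suspension (assumed)
TERMINATION_THRESHOLD = 9   # notices before account termination (assumed)

class RepeatInfringerPolicy:
    """Illustrative only: counts valid infringement notices per subscriber
    and escalates toward the termination the statute anticipates."""

    def __init__(self):
        self.notice_counts = defaultdict(int)

    def record_notice(self, subscriber_id):
        """Log one valid notice and return the action this policy takes."""
        self.notice_counts[subscriber_id] += 1
        count = self.notice_counts[subscriber_id]
        if count >= TERMINATION_THRESHOLD:
            return "terminate"  # the remedy §512(i) explicitly anticipates
        if count >= SUSPENSION_THRESHOLD:
            return "suspend"
        if count >= WARNING_THRESHOLD:
            return "warn"
        return "log"

The point is not the particular numbers but that some good-faith escalation toward termination has to exist; BMG’s evidence suggested that Cox’s version of this logic was designed never to reach the final branch.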

This ambiguity can be exploited by service providers, and in this case, BMG presented substantial evidence to the court that Cox’s policies seemed purposely designed to avoid taking action against repeat infringers within a reasonable interpretation of the DMCA.  The district court opinion written by Judge Liam O’Grady states, “Unfortunately for Cox, the record was replete with evidence that foreclosed any assertion by Cox that it had reasonably implemented a repeat infringer policy.”

Who’s a repeat infringer?

In its appeal, Cox seeks to argue that “repeat infringers” in DMCA §512(i) can only mean “subscribers who have been found liable for infringement by a court or a jury on more than one occasion.” In other words, BMG’s evidence of users consistently accessing unlicensed material does not make those users “repeat infringers” unless they’ve already been found guilty of infringement in a court of law—more than once.

Cox is relying on a specific interpretation of the word “infringer” in the statute, hoping that the appellate court will agree that a “repeat infringer” can only be an individual who has lost at least two copyright infringement cases in court. That population is likely barely large enough to fill a small café, which is considerably smaller than the billions of users anticipated by the architects of the DMCA. And because it is obvious to any reasonable person that one can be guilty of a violation without being held liable—if you’re let off for speeding with a warning, it doesn’t mean you weren’t speeding—any reasonable person should conclude that Congress’s use of the word “infringer” in this case was meant to describe individuals engaged in unlicensed access to, or use of, copyrighted material, even if the rights holders do not intend to pursue litigation against them.

It is a truly bizarre argument, which overtly pretends the DMCA is something other than what it is. The background, intent, and language of the law are known, by the parties involved in its writing, to have been designed as a process by which ISPs and rights holders would collaborate to mitigate infringement—no matter where the users reside or who they are—without costly litigation. For example, a “repeat infringer” under the DMCA can easily be—and often is—a user in a foreign country who has never seen a U.S. court, let alone been a named party in a U.S. copyright case.

The DMCA’s existence is based partly on an understanding that worldwide users would inevitably infringe—either willfully or unintentionally—but that the ISPs and rights holders would have a mechanism for removing infringing files or stopping infringing activity without anyone getting sued—as long as all parties met the conditions in the agreement.

Photo by SergeyF

Knowledge of Infringement

If you call someone on their cell three or four times and they don’t answer, there may be any number of reasonable explanations. If you call them 50 times and they don’t answer, they’re ducking your calls—or in legal terms, engaging in “willful blindness” by choosing not to hear what you have to say. So, what about a few million calls? At issue in the lower court decision was the fact that Cox simply ignored millions of notices sent by Rightscorp on behalf of BMG containing IP addresses and other corroborating information demonstrating repeat infringement by numerous subscribers.

Cox’s rejection of these notices was deemed “willful blindness,” which is the legal equivalent of taking affirmative action to infringe, hence the charge of contributory infringement. On appeal, Cox contends that the notices it chose to ignore would not have constituted “knowledge of infringement” anyway, thus seeking a standard of “knowledge” so narrow that it would effectively excuse all ISPs from adopting any kind of anti-infringement policy as mandated by the DMCA.

The Betamax Argument

Cox further argues that as a conduit provider, they can only be held liable if they “actively encourage or induce infringement through affirmative acts.”  In this regard, Cox relies on a very broad reading of Sony Corp v Universal Studios (1984), which held that Sony could not be liable for copyright infringements that may be committed by users of its VCRs.  There are several parts to the Sony ruling, including the Court’s holding that because the VCR could be used for “substantial non-infringing purposes,” Sony could not be liable for any infringing uses unless it actively induced or encouraged that infringement by its customers.

Cox now seeks to argue, by the same principle, that because internet access may be used for “substantial non-infringing” purposes, it should be held to the same standard as Sony, since it likewise did not induce its users to infringe. Once again, Cox seeks to bypass the terms of the DMCA with an argument that, if upheld, would unconditionally absolve all ISPs of liability unless they promoted infringement in their marketing.

The fact that the internet, writ large, is used for substantial non-infringing purposes is immaterial.  To stick with automotive analogies, just because cargo trucks are used for substantially legal purposes, this has no bearing on the liability of a trucking company, if it were to turn a blind eye to some of its drivers transporting contraband across state lines.

Nevertheless, Cox—with the hyperbolic assistance of EFF/PK—seeks to argue that if this hypothetical trucking company pays a penalty, loses its license, or fires the named truckers, that will lead to the end of trucking itself, and we all starve. If 100 Cox subscribers lose their access due to their infringing activity, it has no more bearing on the internet and its billions of users than if a different 100 subscribers lost their access due to non-payment for the service.

As argued in the brief filed by the Copyright Alliance, “If Cox’s view was the law, then as long as it was not actively inducing or promoting infringement, Cox could throw each and every infringement notice it received straight into the trash, and the “Abuse Group” charged with addressing online piracy could knowingly permit active infringement without creating any risk of liability to Cox.”

To summarize, the Cox argument boils down to the following:  1) repeat infringers are not repeat infringers; 2) even if they were repeat infringers, we could not know they were repeat infringers from the evidence presented; 3) even if they were repeat infringers and we knew about them, we aren’t liable because we didn’t tell them to infringe.

Implications of this Case

This effort to treat the safe harbor as an unconditional liability shield is generally where large ISPs have tried to move the conversation, both in the courts and in the public dialogue. But the DMCA was never meant to provide a free ride for service providers, although it has inadvertently produced that result to a greater extent than anticipated by its authors.

As Copyright Alliance also observes in its brief, if a rights holder the size of BMG has no remedy in a case in which a service provider has been shown to have circumvented the provisions in the law, then independent rights holders truly have no hope of protection whatsoever in the evolving digital market.  Instead, independent rights holders need the DMCA to be made more effective than it is by revising some of the ambiguity in the statutes, which leads to the kind of bad-faith policies on the part of ISPs that are apparent in this case.


Read Christopher Zara’s Section 230 Article


Photo by Pond5.

Christopher Zara, writing for Backchannel, offers an excellent discussion about Section 230 of the Communications Decency Act of 1996.  He provides historical context and a balanced presentation of the challenges that have arisen from the differences between the law’s intent and its application.

“Given how often Section 230 is championed, cited, and showered with superlatives, you might not know there is a raging debate going on about how well the law actually works.”

Of course, the business broadly described as “the internet” was a very different animal in 1996, and as Zara describes in considerable detail, we have yet to fully address some of the liability implications that may pertain to an Airbnb-type platform versus those that might pertain to a Facebook-type platform. “Digital rights” advocates, and of course the businesses themselves, strive to have all platforms treated equally under Section 230—meaning that Airbnb would be no more responsible for a bad listing than Facebook is for your sharing defamatory material. But is Airbnb truly a web platform hosting third-party content in the same sense as Facebook, or is it a hotel booking service that uses web technology, thus implying a different set of responsibilities never considered under Section 230?

In fact, if you read my last post and the critical comment about it from Anonymous, he/she correctly points out that Section 230 was created in order to allow platforms to remove objectionable material without incurring liability. Zara’s article provides insightful background on this from Senator Ron Wyden (D-OR), co-author of Section 230 with Chris Cox (R-CA) when both served in the House of Representatives. But Zara also observes that Section 230 is indeed invoked by platform operators as a defense for taking no action to remove potentially harmful material.

As cyberspace becomes increasingly integrated with the physical world—and as users come to grips with the supposed neutrality of information—we are probably going to hear a lot more about Section 230 in the relatively near future. Christopher Zara’s article is a great starting point for anyone hoping, as I am, to better understand the issues.

Posted in Digital Culture, Law & Policy | Tagged , | Leave a comment

The Accountability of Web Platforms

Photo by scanrail

Online service providers (OSPs) are generally shielded by two major statutes from liabilities that may stem from the content uploaded by users of their platforms.  Section 512 of the DMCA (1998) provides the conditions under which an OSP may avoid liability for copyright infringement, and Section 230 of the Communications Decency Act (1996) covers just about every other kind of content.

In simple terms, any platform that allows users—rather than site owners or operators—to upload content (sites like YouTube, WordPress, Facebook, Twitter, etc.) is not considered a “publisher” under CDA Section 230 and, therefore, remains free from liability for nearly any harm that may be caused by the user-generated content hosted on its site. So, if a Twitter mob incites assault or violence, Twitter is generally in the clear. If an IS recruiting video inspires a lone-wolf attack, YouTube is not held responsible. If fake news fills a Facebook feed, Facebook is not responsible for publishing lies or slander because, under the statute, it is not the “publisher” of the material.

“Digital rights” groups defend CDA 230 as an essential protection for free speech online and as a mechanism for the development of the web overall.  In general, this argument has a lot of merit, but these activist organizations are not above straining their support of Section 230 beyond reason at times. As discussed in this post, the Electronic Frontier Foundation came strangely close to defending the alleged criminal activities of the owners of Backpage while seeking to defend the principles of Section 230. In that particular case, the indictment of last October states that the owners of the site took direct action to further capitalize on the illegal sex trade, which they had to know contributed to more than 90% of site revenues.

Hence, the assumed ignorance of the OSP management, upon which the Section 230 shield is based, seems reasonably lost in that case; and EFF’s defending Backpage on principle alone appears to defy common sense.  The Supreme Court is scheduled to consider whether or not to take up Doe v. Backpage during its conference tomorrow.  If the Court agrees to consider the case, expect to hear a lot about Section 230 in the coming weeks.

A Mundane Example

As a very simple example of what we’re talking about, I accidentally called a scam Apple support service one day because I was rushing and because a number for the fake service appeared at the top of Google’s search results.  Fortunately, I realized I’d called a predatory operator and hung up before it cost me anything, but for those who were cheated out of credit card or other information, doesn’t it seem reasonable that Google should be held accountable for having taken fees to place the bogus service in the advertised top spot?  It seems to me they should. But what about monetizing content that may contribute indirectly to assault, battery, or murder?

Pulse Nightclub Suit

In December, a Michigan-based law firm filed suit in Florida against Google, Facebook, and Twitter on behalf of three families who lost loved ones in the Pulse Nightclub shooting of June 12, 2016, in which Omar Mateen shot and killed 49 people in the deadliest mass shooting in US history. The foundation of the case, led by attorney Keith Altman, is that the monetized hosting of content produced by the Islamic State “provided material support to terrorists” in violation of federal law and contributed to the actions taken by Mateen. The Orlando Sentinel, reporting on the story, quotes internet and communications attorney J.B. Harris stating, “It’s creative. It’s bold. But I don’t think he’s going to succeed under the federal anti-terrorism statute that he cites.”

That sounds about right to my layman’s ear.  In this case, I suspect Altman would have a very high burden, even to connect the IS material to Mateen’s decision to act, let alone to hold the OSPs responsible for the tragedy under that statute.  Moreover, I don’t think the public is going to warm to the idea of accusing web platforms of “providing material support to terrorists,” via third-party content, least of all in the climate we’re now entering.

Nevertheless, the Sentinel notes that Harris speculates Altman might get a better hearing in a Florida local court as a “strict negligence or liability” case, which begins to sound like a more balanced framing of the OSPs’ alleged liability in this circumstance. I suspect the case would be a long shot either way, but Altman is correct in his observation that the major OSPs have historically enjoyed tremendous freedom in maintaining a laissez-faire approach to monitoring content on their platforms.

Possible Change in Attitudes?

As speculated in my last post, the bitter taste of fake news and Russian hacking may make the public more willing than it has been to date to hold major platforms responsible for content. In particular, when an OSP earns revenue by hosting harmful content, whether it’s a scam like the one noted above or an IS recruiting video supported by brand advertising on YouTube, we may begin to see some cracks in public support for the “we don’t know” defense, regardless of the liability shields.

With regard to copyright infringement and Section 512, we know that the major OSPs have played a repetitive semantic game on the theme that “they cannot know” what’s happening on their sites. As I’ve said in the past, this argument is especially coy coming from Google, which vows to one day know us better than we know ourselves—but will apparently remain ignorant of the content on its own platforms. I don’t think anyone disputes that content moderation poses technical and legal challenges. But so far, the conversation has been skewed by the assumption that any moderation is undesirable because it is tantamount to censorship; and this has benefitted the platforms by leaving them free to monetize nearly anything.

With cases like Backpage, and perhaps this Pulse Nightclub suit, playing out against a landscape of users coming to grips with some of the inherent flaws of social media platforms, we may see OSPs take more direct, voluntary action to mitigate the use of their services by bad actors.  Or as Charlie Warzel writes, in a related article on BuzzFeed, “…trotting out the ‘But we’re just a digital platform’ excuse as a quick and easy abdication of responsibility for the perhaps unforeseen — but maybe also inevitable — consequences of Big Tech’s various creations is fast becoming a nonstarter.”


The Morning After, or Social Media Is a Humbug


Photo by photocreo.

Time for a hard look in the mirror?  We’ve been on a social media bender for years, and I’m thinking January 1, 2017 might be the day we begin to sober up and come to grips with its more negative effects.  When I began writing about all this stuff in 2011, it was partly in response to the fact that people seemed too eager to give the internet industry itself a free pass on the ill-effects of several major platforms because the internet writ large is perceived as so essential to democracy. And thanks to social media, the internet became an extension of our egos, much in the same way liquor makes us all good looking and smart.

In January of 2012, a relatively small cadre of internet wonks rallied people to shout down SOPA—a bill almost nobody understood—and progressives in particular congratulated themselves for participating in “true democracy in action.” It scared the hell out of me because it was not democracy in action but industry-backed manipulation disguised as democracy.  Or as just one example of its insidious nature, the campaign was partly driven by the same anonymous denizens of a site called 4Chan, whence come many agitators of the alt-right that people now realize is a thing. My left-leaning friends who helped drive SOPA over the cliff failed to recognize the dark genie they’d let out of the bottle.  Forget that SOPA was not the toxic legislation everyone had been told it was; that’s just a minor, nagging detail. What matters is that the campaign against it was a blueprint for circumventing the democratic process itself.

The capacity to unleash thoughtless reaction in any number of directions is a power we have ceded to social media platforms. If spurious Trump-tweets are disconcerting to you, I’d note that the same Pavlovian mechanism was at work in the anti-SOPA campaign and is more or less the manner in which we continue to dumb down the most complex issues into bites, memes, and zingers. Kind of like those big ideas that seem really smart while under the influence, but are best left unfulfilled in the harsh reality of the ensuing hangover. So, here’s a question: Is a platform like Twitter valuable because people get to respond to what a politician might say, or is it toxic because it gives a politician a round-the-clock platform for riling people up with some insipid one-liner in the first place? Hint: Twitter is fine for sharing links but a stupid way to discuss real issues. The word twit is right there in the name.

With all the attention the election has focused on fake news and manipulation of information by a foreign power, it has been interesting to observe—at least anecdotally—a renewed sense of vigilance about the sources of information people choose to share or cite on Facebook.  It was not surprising, of course, that some folks wanted to blame the platform operators for failing to weed out fake news. And although it isn’t exactly Facebook’s fault that people are happy to believe nonsense in the first place, the medium is still the message; and it is a medium that instantly rewards what’s popular, not necessarily what is true, decent, thoughtful, or fair.  That was what frightened me about the anti-SOPA campaign—that suddenly being “right” en masse completely overwhelmed common sense, rational analysis, or the exchange of ideas.  The fact that nobody happened to be right was just bitter icing on the cake.

I’ve seen people respond to the fake news problem with the sentiment that they don’t want corporations like Facebook editing what we see online, but the fact is these entities already edit what we see, in a manner that serves their advertising and data-harvesting interests. So, as long as people are going to use search engines and social media for acquiring news and information, the OSPs could be better corporate citizens and take a harder look at the negative effects their anything-goes approach can have on business and consumers; on politics and journalism; on social behaviors and discourse; and even on the advertising that is their bread and butter.

One question I ask now is whether or not this sudden, wider realization that the internet may be chock-full of garbage—and is highly vulnerable to manipulation—will change the mood of the public with regard to giving OSPs quite so much latitude to sweep a million sins under the rug of the First Amendment. Invariably, whether we’re talking about copyright infringement, counterfeit operations, or predators, criminals, and terrorists using legal platforms for illegal purposes, the general response from Google & Friends has been that these problems cannot be addressed without harming otherwise protected speech. It’s been an effective message but largely not a true one—especially when an OSP may earn revenue from the activities of bad actors and good actors at the same time.

In recent weeks, two stories trended about harassment of Muslims—one on the New York subway and one on a Delta flight—that proved to be false.  The second of these was perpetrated by a known prankster, who creates these spectacles for his YouTube channel. Historically, the progressive view would be to defend his free speech rights in defense of YouTube itself; but creating false claims of harassment is not only not protected speech, it is purposely throwing fuel on an already dangerous fire. Is YouTube required to support this guy’s channel because of the First Amendment?  Absolutely not. No more than they are required to support terrorist recruiting videos or videos demonstrating how to hack someone’s computer or videos that infringe the rights of musicians or other creators.

In reality, web platforms do not have the kind of constraints under the First Amendment that they often claim. The First Amendment protects American citizens and entities against censorship by state actors, while a privately owned business like a social media site can adopt nearly any Terms of Service its operators choose. Quite simply, YouTube could decide tomorrow to become a platform exclusively for videos featuring left-handed, yodeling, Ukrainian sword swallowers, and the creators of the millions of videos consequently removed would not be able to make a First Amendment claim against the company. To the contrary, such a suit would conflict with the First Amendment rights of YouTube, which happen to be the same rights that allow a newspaper to exercise editorial oversight of its content.

Getting Real About Free Speech

The big question is instantly tricky, of course, because the new president-elect is the first in living memory to voice such open hostility toward the free press and free speech; we can, therefore, imagine real policies that could become legitimate First Amendment challenges. As such, it’s a good time to make more sober distinctions between actual First Amendment threats and perceived ones. For the last several years, the internet industry has successfully labeled just about every effort to enforce reasonable, legal protections for consumers and businesses as a threat to free speech. But mitigating tangible harm in cyberspace is not in conflict with the First Amendment any more than it is in physical space. In fact, it is often less of an issue because the harmful actors are frequently neither located in the U.S. nor U.S. citizens, which means they do not technically enjoy—or even necessarily respect—First Amendment protections.

Of course, the conversation is probably going to get a lot dicier now. One major flaw of the Obama administration was that it gave way too much latitude to Google and other Silicon Valley firms to shape policy in a number of areas.  If the Trump administration and Congress take meaningful action to mitigate various types of harm online, the internet industry and the “digital rights” activists will likely amp up their free speech and “open internet” rhetoric, which will play even louder against the drumbeat of the Trump administration than it did during the Obama years.

Even trickier is the possibility that the new Executive really will advocate policies that run afoul of constitutional protections; and we don’t honestly know the extent to which Silicon Valley firms, to whom we’ve volunteered so much information, will cooperate.  One way or another, it’s going to be a bumpy damn ride, and a lot of crazy shit is going to fly around the web in the coming years—a lot of it disposable, trendy nonsense that will only further divide people who might otherwise find social and political common ground.

We’ve already seen attempts by the EFF, Techdirt, and the Freedom of the Press Foundation to conflate Trump’s press-censorship rhetoric with the News Media Alliance’s interests in protecting its own copyrights online. And we can expect more of this kind of blurry messaging in the months and years to come. I believe these parties mean well, or want to mean well, but they’re still so drunk from the tech-utopian punchbowl that they don’t notice the bowl is full of all-sorts.*

With their over-broad invocations of the First Amendment, and their love of online anonymity, the tech-utopian observers fail to acknowledge the role major online platforms have played in making our political process uglier than it was 20 years ago. We’ve managed to recreate the outrageous theatrics of the turbulent 19th century rather than the more contemplative and moderated environment we had promised ourselves for the 21st.  Rational people are suddenly noticing that we’ve entered what they’re calling a post-truth era, which sounds to my ear like the queen mother of unintended consequences for what was billed as the “information age.”

In a recent video, Robert Reich recommended that people find opportunities to talk to one another in real life, especially if they are on opposite sides of the Trump divide.  Personally, I think he has the right idea.  After five years working on a non-partisan issue like copyright, I have become friends with some extraordinarily brilliant, generous, and empathetic individuals who are traditionally conservative and whom I certainly trust to uphold the core principles of the Republic, even as we discuss different views on a wide variety of issues.

In physical space, we traditionally encounter one another as human beings whose personal narratives and opinions remain invisible. On social media platforms, it’s the opposite: everyone’s narrative is on display while their basic humanity remains invisible. In this sense, social media’s promise to “connect” us is a bit of a humbug. Not that I would advocate outright abstention any more than I intend to give up scotch; but the start of 2017 is probably a good time for a reality check and a freshly moderated approach to the pros and cons of these platforms.


*All-sorts was a cask full of the combined dregs from drinks left on tables in a tavern, including God-knows how much backwash. A cup of all-sorts was the cheapest drink available, and for good reason.
