Wrangling With the Facebook Problem

With Mark Zuckerberg set to testify today on Capitol Hill, and revelations last week that the Cambridge Analytica data breach is now estimated to have affected nearly 90 million users (up from around 50 million), there seems to be no shortage of theories as to how to solve the “Facebook problem.” Congress will ask Zuckerberg what Facebook’s leadership knew about the abuse of its data, when they knew it, and what the company plans to do going forward to protect consumers. Regulatory solutions have their limits, of course, and may even exacerbate a problem if legislators fail to properly grasp the underlying issues. Still, the conversation is long overdue on themes like how much data is being gathered, by which corporations, and for what purposes.

For the past few weeks, Zuckerberg has been contrite, even self-flagellating, which is admittedly a refreshing change from the standard arrogance of Silicon Valley executives. But that’s mostly theater. At best, Congress can do what Congress does, which is to tell Facebook to clean up its act or have the mess cleaned up for it. Meanwhile, there seems to be a consensus among experienced technologists and tech writers that there are limits to how much Facebook can be repaired. “We cannot have regulators trim a beast as if they were barbers and call that change,” writes Jaron Lanier in an editorial advocating the wisdom of those willing to delete Facebook from their lives.

Many of the responses and proposals following the Facebook fallout have been variations on the theme that the internet (as if it were a conscious being) must return to some idealized, pre-commercial set of values people seem to believe were present 20+ years ago. One might even say that these voices insisting we “make the internet great again” are guilty of a related, ahistorical folly: they cannot sensibly answer quite what again means in that sentiment. For instance, a recent article by Tim Wu for The New York Times manages to criticize Facebook while alluding to this familiar refrain of cybernetic idealism. He writes …

“From the day it first sought revenue, Facebook prioritized growth over any other possible goal, maximizing the harvest of data and human attention. Its promises to investors have demanded an ever-improving ability to spy on and manipulate large populations of people. Facebook, at its core, is a surveillance machine, and to expect that to change is misplaced optimism.”

It’s not that I disagree with that description—it’s irrefutable—so much as I find it perplexing to imagine that anyone ever believed there might have been another “possible goal” Facebook was going to pursue. While Zuckerberg was still in middle school, I was in meetings where major ad execs were counting on exactly the kind of consumer-specific targeting that was finally made possible when Google and Facebook figured out how to get us to share gigabytes’ worth of personal information without minding. The “surveillance machine” was the golden goose investors were banking on from the moment the internet became publicly accessible.

Wu’s answer to the problem is good old-fashioned competition. Asserting that the web’s tendency to foster monopolies is not a matter of fate, he states …

“…the real challenge is gaining a critical mass of users. Facebook, with its 2.2 billion users, will not disappear, and it has a track record of buying or diminishing its rivals (see Instagram and Foursquare). But as Lyft is proving by stealing market share from Uber, and as Snapchat proved by taking younger audiences from Facebook, ‘network effects’ are not destiny.”

While I think Wu is right to say that a competitor could theoretically do Facebook better (e.g., by being better stewards of our data), I remain skeptical that “network effects” can be escaped when it comes to certain types of platforms. Because if New Facebook came along offering the same features, plus new consumer-protection benefits, we’re all—I mean all of us—migrating to the new platform, leaving Old Facebook to go drink with MySpace.

Of course, with 2.2 billion users and a market cap of nearly $460 billion, the company in the best position to become New Facebook is Facebook. Regardless, most of us only need one time-sucking, data-gathering cyber-water-cooler in our lives—if we really need any at all—so I still believe certain internet monopolies are inevitable.
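To make that intuition concrete, here is a minimal, hypothetical sketch of the tipping-point logic. The assumption that a platform’s value to each user scales with the number of other users on it, and the quality-edge parameter q, are mine, invented purely for illustration; they are not anything Wu or Facebook has published.

```python
# Toy model (an illustrative assumption, not anyone's empirical claim):
# a user's value from a platform scales with how many other users are on it,
# and a challenger ("New Facebook") offers a quality edge q, e.g. better
# stewardship of our data.

TOTAL_USERS = 2_200_000_000  # Facebook's user base, per Wu


def prefers_challenger(adopters: int, q: float) -> bool:
    """True if a rational user gets more value from the challenger right now."""
    value_new = adopters * (1 + q)        # fewer users, but a quality edge
    value_old = TOTAL_USERS - adopters    # the incumbent keeps everyone else
    return value_new > value_old


# Below the tipping point nobody rationally moves; above it, everybody does,
# so the stable outcomes are "incumbent keeps all" or "challenger takes all."
for adopters in (100_000_000, 800_000_000, 1_500_000_000):
    print(f"{adopters:,} adopters -> migrate? {prefers_challenger(adopters, q=0.5)}")
```

Under these toy assumptions, competition doesn’t yield coexistence; it only decides which single platform ends up with everybody, which is the sense in which certain internet monopolies look inevitable.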

If mitigating data abuse by means of competition is what Wu is truly advocating, his references to Lyft and Snapchat seem to sidestep some very tough questions. For instance, a ride-hailing app/service is not remotely comparable to a social media platform. Lyft and Uber are transaction facilitators, and the consumer only benefits by having multiple players compete to provide one service—a ride—on an as-needed basis. That model is simply not analogous to the reasons users spend time and effort contributing all of the content on a social platform.

As for Snapchat, it’s true that my teenager tells me she and her friends are there because “Facebook is for old people,” but at 150 million users, the platform is hardly proof that Facebook’s network effect can be overcome. Meanwhile, I think it’s anybody’s guess what this next generation of users is going to expect or want from social media as they become young adults. This includes growing bored with the whole enterprise and bailing.

Competition is a good thing, but Wu’s generalized appeal to market forces as a response to the “Facebook problem” echoes arguments that have been made at various times in the context of copyright enforcement online. For instance, in mid-2016, during hearings about the DMCA, critics of any proposal to introduce a “takedown/staydown” provision insisted that the cost of implementation would be so high that it would entrench YouTube as the monopolistic social-video platform.

This line of reasoning has always lacked integrity because it completely ignores the various market forces, including the network effect, that sustain YouTube’s dominance. I suspect Wu is making a similar error in this case, perhaps oversimplifying the challenge. As just one prosaic example, I am very much drawn to Lanier’s sentiment when he writes, “…those who have had accounts and then deleted them are true pioneers. They will see things and learn things that are new in the world.” Indeed. But if I’m being honest, I discovered his article because a friend shared it on Facebook.

YouTube Bans Gun Videos. Raises Difficult Questions.

While Austin, TX was still searching for its serial bomber, various guests on CNN were of course speculating about the assailant’s level of expertise (perhaps even formal training) due to the technical sophistication of some of the explosive devices. Cynically, I thought, “Or he has YouTube.”

For years, the internet industry, led by the major platforms, has invoked free speech as a rationale for taking a hands-off approach to the content on their sites. This has included content that is intrinsically illegal, like copyright-infringing material, or content that fosters criminal activity, like demos of computer hacking, terrorist propaganda, or drug trafficking.

Beginning with advertisers drawing a line in the sand in early 2016, economic pressure to clean up the platforms collided a year later with political pressure, as both citizens and lawmakers finally decided that the major sites could be held at least somewhat responsible for the user-generated content (UGC) they host. And it turns out that under all this pressure, the companies seem to be discovering capabilities they previously claimed were untenable—though it remains to be seen whether they will make decisions that are both coherent and socially beneficial.

This past week, YouTube announced that it would remove or bar certain firearm-related videos, which will, no doubt, be welcome news for many Americans as the groundswell demanding better gun control gains momentum, due largely to the energy of the Parkland students. But my friend and colleague Devlin Hartline, at the Center for the Protection of Intellectual Property, was quick to notice a hypocrisy in YouTube’s decision—namely, that the company would be banning at least some videos that are neither illegal nor promoting illegal activity, even implicating two constitutional rights at the same time.

Writing as someone who is hostile toward Second Amendment maximalism, I cannot dispute Hartline’s observation, especially as a colleague who is likewise opposed to sites monetizing outright illegal and highly toxic content under a blanket claim of platform neutrality. But I want to look past the heated complexities of the gun issue—and even the copyright issue—to consider how this recent decision by YouTube highlights the mercurial nature of the major sites, and how this frustrates efforts toward a coherent cyber policy.

The Chameleon Sites

The major UGC platforms—e.g., Google, YouTube, Facebook, Twitter, Reddit—are chameleons with at least three bold colors (“Community,” “Commons,” and “Brand”) and one diluted color (“Corporation”). For the sake of simplicity, I’ll define these terms as follows:

Community

As private entities comparable to physical retail environments, these sites are entitled to foster any environment they want—gun-free or panda-free, if they choose—and users will either join and access the platform or they won’t. In this market-based context, the website is serving as a “Community” and owes no fealty to free speech or most other constitutional rights.

Commons

Although none of these sites is a public entity, there is arguably an extent to which a limited number of platforms have become the primary public fora for communication, news, exchange of ideas and information, and commerce. As a result, the notion that these sites represent a “Commons” has long been supported by user sentiment. But this sensibility has allowed the platform owners to appeal to the First Amendment as a rationale for taking almost no responsibility for managing content, including some of the paid advertising.

Brand

Presumably, YouTube’s recent gun-video ban is an example of a PR decision in which the company has decided (correctly or not) that being on the wrong side of the current trend is bad for the “Brand.” This type of decision corresponds to the “Community” model but is anathema to the “Commons” model.

Corporation

I refer to this as a diluted color because too often, we have a habit of ignoring it, of talking about “our internet” as though that notion were coextensive with the business interests of Google et al. Meanwhile, many of the invisible decisions—like Facebook authorizing a developer who ends up abusing our data—are profit-based, “Corporation” decisions that are presented to users as supporting either the “Community” or the “Commons.” (Think of the “fun” personality quiz you share with friends that is actually mining your data in order to manipulate your political views.)

Rethinking Liability & Responsibility

The two statutory liability protections written in the late 1990s—Section 230 of the CDA and Section 512 of the DMCA—were not intended to foster a “Commons” model per se. They were simply meant to shield as-yet undefined platforms from liability for the unlawful conduct of as-yet undefined types of users. But because certain major platforms today have qualities akin to a “Commons,” those liability protections are easily conflated with the speech rights of users, which fortifies the liability shields in a manner that allows these sites to have it both ways—to monetize everything as private entities while appealing to the illusion that they are public entities.

Further exacerbating the semantic confusion, “the internet” is usually described as a single entity from a policy perspective. This has allowed the biggest, wealthiest, and most technologically capable providers to entrench a non-liability paradigm by rhetorically citing the interests of independent, small, and start-up enterprises. That analysis is unreasonable when the total number of major platforms—the ones that could arguably be considered a “Commons”—is just a handful of entities in contrast to the roughly one billion sites online.

Historically, those of us in the copyright fight have watched the big platforms color-shift between the rhetoric of “Community” and the rhetoric of “Commons,” depending on which identity best serves their interests in the moment. So, if new federally mandated guidelines are called for in the wake of all the Facebook fallout—and this seems quite possible—perhaps one starting point is to address the “Community/Commons” dichotomy. Because it seems to me that new policies need not address the entire internet when perhaps fewer than a dozen sites can reasonably be described as semi-public.

The Challenge of Semi-Public Spaces

Hartline is a law and policy expert who knows full well that YouTube doesn’t actually have to answer to the First or Second Amendment, but his response to the new gun-video ban reflects how most users tend to feel when otherwise legal material they post online is removed. If I post a link to this blog on Facebook, and it’s removed, I can’t sue the company for speech infringement; but that doesn’t mean my speech would not, in effect, be suppressed, precisely because Facebook owns so much of the market of potential readers.

At the same time, I would not necessarily know whether the removal was done to serve “Community,” “Brand,” or “Corporation,” but for sure, the removal would belie any pretense that the platform is a “Commons.” Conversely, the more a platform like Facebook chips away at the illusion that it is a “Commons” (i.e., the more content it controls), the more people are likely to abandon the site in a massive self-fulfilling prophecy, taking “Community,” “Brand,” and “Corporation” down into the oubliette with MySpace.

We have to acknowledge that there truly is no historical precedent for so profound a merger of public and private interests as these particular platforms. There is some case-law precedent in the analog world in which private property functions as public space and, thus, courts have limited an owner’s ability to prohibit speech. The shopping mall naturally comes to mind, but neither a mall nor any other semi-public, physical space was designed with the purpose of hosting expression, let alone the expression of millions of people in the same venue.

Addressing the “Commons/Community” dichotomy could help contextualize the intent of the existing statutory framework on site liability, which over-broadly lumps “the web” into one big policy bucket. Gaps in the legislative language have too often allowed the major sites with the most public influence to behave as chameleons in both the courts and the court of public opinion. Thus, the major platforms have a history of rejecting and criticizing even statutorily mandated systems to mitigate abuse—all in the name of protecting user speech, which they are not in fact obligated to protect.

So, regardless of individual views about guns and gun control, Hartline is correct to observe that this latest decision by YouTube reveals that our social and legal relationship to the major platforms remains bipolar at best. Hence, before any reasonable policy changes can emerge, it seems the next step is to define the terms that more accurately describe the web we have instead of the web we expected some twenty years ago.


ADDENDUM: At the moment of publication, a colleague sent a link to this article by Eriq Gardner at The Hollywood Reporter. The court holds that YouTube is not a public forum, which it isn’t. But that doesn’t wholly resolve the challenge.

Facebook, Cambridge Analytica, & Our Digital Dysfunction

In late November of 2011, one of the hottest-trending, internet-related topics was the campaign to stop the SOPA/PIPA bills. In early-to-mid 2017, the noisiest issue was “net neutrality,” as FCC Chairman Pai made good on his promise to reverse the 2015 Open Internet Order. In both cases, the public was served volumes of emotional hyperbole, created by vested interests and used to sell variations on the theme that democracy itself was under attack. Meanwhile, our democracy actually was under attack, just not in a way anybody seemed to care about very much, in contrast to issues that, ironically enough, only exacerbated the underlying problem.

At roughly the same time that “digital rights” organizations—EFF, Fight for the Future, Public Knowledge, et al.—began amping up the anti-SOPA rhetoric, convincing Americans that Hollywood was determined to “break the internet and stifle free speech,” Facebook was signing a consent decree with the Federal Trade Commission after the agency charged the company with deceiving “consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.”

As Wall Street eagerly anticipated the platform’s IPO, Facebook entered into the FTC agreement, which barred the company from certain privacy-breaching conduct and mandated a 20-year regime of third-party privacy audits for compliance. But last weekend, The Guardian revealed whistleblower Christopher Wylie, a former Cambridge Analytica employee, who says he helped build a “propaganda machine” based on the data of at least 50 million American Facebook users. In response, the FTC is now investigating whether the social media platform violated that 2011 consent decree. If so, the penalty—on paper anyway—would be $40,000 times 50 million.
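For scale, here is the back-of-the-envelope arithmetic, a minimal sketch that assumes one $40,000 violation per affected user; how the FTC would actually count violations is anyone’s guess:

```python
# Back-of-the-envelope statutory maximum; assumes one $40,000 violation per
# affected user, which is a simplification of how the FTC would count.
per_violation_penalty = 40_000      # dollars per violation
affected_users = 50_000_000         # per The Guardian's reporting

statutory_max = per_violation_penalty * affected_users
print(f"${statutory_max:,}")        # $2,000,000,000,000, i.e., $2 trillion
```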

While nobody expects a two-trillion-dollar fine at the end of this process, the social media giant has a lot of explaining to do, and Senators Amy Klobuchar (D-MN) and John Kennedy (R-LA) have called for Mark Zuckerberg to personally testify before Congress. It’s going to be tense for whoever takes that seat, just based on the accounts of former Facebook operations manager Sandy Parakilas, who has spoken to The Guardian, NPR, and others. He describes an internal policy of Facebook executives choosing not to know how user data is used after being shared with developers. Parakilas even alleges the rationale that “Facebook [believed] it was in a stronger legal position if it didn’t know about the abuse that was happening.”

That’s a familiar refrain for anyone who’s been banging their head against this bulwark excuse for everything from copyright infringement to sex trafficking—the holy trinity of internet platform defenses: We didn’t know. We can’t know. We shouldn’t know. Unfortunately for Facebook and other major platforms, what Parakilas alleges in the press is called willful blindness, which is the legal equivalent of knowing exactly what’s going on while pretending you don’t.

It is at least encouraging that the conversation is finally changing. Less than two years ago, it was tough to get much attention for a post describing how a statutory liability shield like Section 230 mutated into a presumed blanket immunity from responsibility for everything that happens on a platform. This morning, that exact narrative was a lead story on NPR, in which Alina Selyukh steps listeners through the story, from the rationale for the statute in 1996 right up to the only amendment to the law (FOSTA), proposed in response to its misapplication as “a teflon shield” immunizing Backpage for its alleged role in promoting child sex trafficking.

During the five years between the Facebook/FTC consent agreement and the election of 2016, the narrative persisted that an “open” internet was inherently a democracy-sustaining internet. The problem was that what “open” really meant to the major platform companies was permission to do pretty much whatever the hell they wanted. And they did.

The reason I bracketed the start of this post with the anti-SOPA campaign and the “net neutrality” kerfuffle is to make the point that, when it comes to cyber policy, we have consistently been instructed by the industry itself to look at the wrong issues. For instance, SOPA would have had zero negative effect on speech, and the “net neutrality” issue was entirely irrelevant to the Cambridge Analytica story, which represents a very real, cyber-age threat to the health of the Republic. If anything, these revelations demonstrate why the FTC’s authority over edge providers like Facebook and Google is far more urgent than the matter of placing ISPs under the regulatory framework of the FCC.

I’ve been strident, to say the least, in denouncing organizations like the Electronic Frontier Foundation for presuming to rally the free speech right in defense of an almost-universal, zero-liability policy for internet companies. Off the top of my head, the internet would be destroyed and speech chilled, according to EFF and friends, if we had passed SOPA/PIPA or ratified the TPP; if we allowed internet companies to take voluntary action to stop various crimes and abuses; if we allow Backpage to face litigation or pass FOSTA; if the 2015 FCC OIO is reversed; if copyrights are ever enforced by anybody for any reason; or if we should, heaven forbid, rethink the ultra-libertarian, disrupt-culture bullshit that led anyone to believe that social media was fundamentally good for American democracy in the first place!

We’ve been swallowing a lot of nonsense about the internet for a long time, and I like to think of this period as our peyote ritual—a time to finally vomit up all these demons before we can even attempt sober consideration of what, if any, mitigating action we take next. As Taylor Lorenz describes for The Daily Beast, simply leaving Facebook isn’t so easy (unless we all bail at the same time, I guess), and there should be no reason to abandon the positive attributes—namely, legit social connection—that draw us to these sites in the first place. It’s just that we have to reconcile the fact that the reasons we’re there for ourselves are not the same reasons the platform owners wanted us there. Coming to terms with that disconnect is probably where the next iteration of cyber-policy—whether statutory or voluntary—should begin.