Is Code Free Speech?

I recently watched a documentary on Netflix called The Secret Rules of Modern Living: Algorithms, hosted by mathematician Marcus du Sautoy, and I would recommend this user-friendly guide for anyone who, like me, has basically sucked at math their whole life.  In one segment, du Sautoy describes how a matching algorithm pairs compatible donor sets with patients who need kidney transplants—a problem of global complexity that could never be solved without the algorithm—and one that unquestionably saves lives. Suffice it to say, code is certainly running nearly every important and trivial aspect of our lives today, so the question of whether or not code itself is speech is an acutely important one.

To cut to the chase, even that introductory paragraph suggests that code is not speech because, most of the time, we refer to code’s role as a predicate—namely, in the context of that which code runs.  Action is generally not protected speech but is in fact the threshold moment when various forms of “expression” can become potentially tortious behavior.  And since, most of the time, code is a set of instructions telling a machine to perform a specific set of actions, this would seem to implicate the liability of the author or user of code for the consequences of those actions.

On his blog Uncomputing, Virginia Commonwealth University professor David Golumbia tackles the question of code as speech, placing the matter appropriately at the heart of our current political paradox in which we feel simultaneously frustrated by both corporate and government enterprise, especially when a principal responsibility of the latter is to protect us from the intemperances of the former.  Broadly, the danger inherent in the proposal that code is speech is that it too easily becomes a catch-all defense for the “automated” actions of any number of corporations, thus taking the notion of corporate personhood to a level way beyond the political-finance implications of the SCOTUS decision in Citizens United.  Golumbia writes …

“The cyberlibertarian understanding of “code is speech” contributes to a profoundly conservative assault on the rights of citizens, by depriving the state of the power to regulate and legislate against the corporations that exist only at the state’s pleasure in the first place. This is why “code is speech” has been so powerfully advocated for decades among crypto-anarchists and cypherpunks. Yet at least these groups are, for the most part, explicit about their desire to shrink governmental power and expand the power of capital. Today the view that “code is speech” is far more widespread, but it is no less noxious, than the explicit crypto-anarchist doctrine.” 

Golumbia makes it clear that he recognizes that code can have speech-like qualities and that it can, and should, be considered speech by courts when appropriate.  But he argues that the general proposition that code is fundamentally speech is “more wrong than right,” because code is more action than expression.  Where this question might get tricky for many people is with a case like Apple v FBI, in which a lot of the reportage (and Apple’s own PR) portrayed the story as one in which the corporation is protecting user privacy from government overreach.  Certainly, the privacy issue is part of the story; but as Golumbia points out, Apple presented a code-is-free-speech argument in its motion to vacate a court order this past February. He explains why Apple’s position in this case was on shaky legal ground and further proposes why Apple’s argument is not only weak, but also particularly toxic to civil liberties.

Golumbia refutes Apple’s position in four parts, arguing 1) that the “code is speech” premise is not settled law, as Apple asserted; 2) that even if code were speech as a settled matter, it is not true that the government can never pass laws restricting certain types of speech; 3) that code’s primary purpose is action, while the First Amendment protects expression; and 4) that Apple’s argument in this case is “entirely novel” in its rejection of the government’s right to “compel speech” by ordering the company to write code to provide access to Sayed Farook’s iPhone. This last point is the part that can get clouded for some by the underlying privacy issue; but Golumbia is right, I believe, to sharply criticize the First Amendment defense posed by Apple.

Americans, especially those dismayed by Citizens United, will want to consider seriously what Golumbia is saying in this case.  Apple’s “compelled speech” defense asserts not only that the corporation has exactly the same free speech rights as the individual citizen, but that, in a code-driven world, the corporation may be shielded against any liability stemming from any number of actions. As Golumbia makes clear, the government historically compels corporations to “speak” all the time and also restricts corporate speech in a variety of ways that serve the public interest. A citizen is free to tell his friends on Facebook, “This soda cured my cold,” if he really wants to; but if Coke makes the same claim, they’re pretty screwed. And for good reason.

Of course, Apple’s argument remains hypothetical since the FBI did its own cracking, and the dispute between the computer-maker and the agency will no longer proceed through the courts.  But Golumbia is absolutely right when he writes, “The effect of embracing ‘code is speech’ is to say that governments cannot regulate what corporations do. That might seem like hyperbole, but it is 100% on board with the Silicon Valley view of the world, the overt anarcho-capitalism that many of its leaders embrace, and the covert cyberlibertarianism that so many more accept without fully understanding its consequences.”

With each step into the 21st century, more aspects of our lives become unavoidably dependent upon, or associated with, some form of code.  This underlying reality is the reason we should be critical of the view that organizations like the EFF promote when they perceive a million daily micro-aggressions against “speech” in cyberspace.  The idea that every transaction online is inherently speech—because code itself is speech—is most galling when it pretends to be a defense of individual civil liberties.  Because in practice, it is an argument that—to paraphrase Jaron Lanier—cannot help but cede political and economic power to the companies with the biggest computers.  As this would completely subvert the reason why freedom of speech is articulated in the First Amendment to begin with, it is a legal question of considerable magnitude.


ADDENDUM:  It is also worth noting that there is an extent to which words like code and algorithm become a means of separating the functions of computers from the decisions of human beings.  This rhetoric is often invoked when, for instance, OSPs seek to avoid responsibility for various actions resulting from their technologies.  Of course, if code is not an expression of human choice, then it is certainly not speech; but because it is an expression of human choice that usually has consequences in the physical world, then it is speech that implicates reasonable limits. (Thanks to a colleague for raising this point.)

Apple v FBI Doesn’t Address the Real Challenge

In a story that appeared Monday in The Guardian, writer Danny Yadron projects a hypothetical, but not technically unrealistic, future scenario in which our driverless car hijacks a run to the grocery store, transporting us instead to a police station because face-recognition software resulted in our being wanted for questioning in an investigation.  The eerie scenario itself, Yadron reports, comes from engineer and former US government privacy policy consultant Ashkan Soltani, who warns that this kind of circumstance could become reality if Apple loses its fight with the FBI over whether or not it must write code to circumvent the security system of San Bernardino terrorist Sayed Farook’s iPhone.

Of course, it is not farfetched to anticipate new forms of abuse in our increasingly networked lives, and it is prudent to seek remedies in policy and case law precedent that may preempt such scenarios; but I’m not sure that a ruling one way or another in the Apple case would be quite so prophetic as some observers suggest.  In fact, assuming we do become increasingly networked and adapt to the holistic Internet of Things as effortlessly and rapidly as the technologists expect us to, the matter of protecting civil liberties in this future society seems increasingly paradoxical. After all, government agencies are supposed to be our hedge against the excesses of private enterprise that might invade our privacy or run afoul of antitrust restrictions, just as they are meant to protect us from criminal abuse of the same systems. But do we simultaneously expect private enterprise or “white hat” hackers to protect us from the overreach of government?

Yadron’s article addresses several aspects of this challenge, citing competing points of view from the policy, law-enforcement, and technology sectors.  And there are no easy answers.  For one thing, the current Apple case, involving the cell phone of a known terrorist and a warrant issued by court order, may be too specific to raise the broader question as to who controls the code that runs our day-to-day lives.  As of yesterday morning, the FBI announced that it may be able to crack the iPhone without Apple’s help; but even if the presently suspended legal case were to proceed, Yadron notes that the court could rule in the FBI’s favor in this one extraordinary instance while remaining silent on the much larger question.

My own assumption is that, with regard to cases involving law enforcement, the public is still served by the courts and due process and that new legislation may not be necessary to adapt to new technology. For instance, as dramatic as the futuristic arrest by driverless car scenario may sound, it would be an illegal detention according to existing statute, at least the way Soltani imagines it.  But if similar automation were one day used to capture a wanted criminal based on evidence and an arrest warrant, due process would not necessarily suffer just because the arrest would be partly effected via code. Particularly as we anticipate an inevitable increase in automated law enforcement practices, if we cannot continue to invest faith and power in judicial oversight, we’re basically hosed.

With regard to living day-to-day in a networked society, though, we probably have to imagine scenarios more subtle than the automated arrest by our own robot vehicles—like undetectable invasions that track habits and behaviors, all organized into data that could be used to manipulate or determine opportunities for jobs, education, healthcare, insurance, credit, and so on.  The opportunities these encroachments provide for mischief by corporate, criminal, or government entities are indeed new territory—much more so it seems than the Apple/FBI case—and could easily demand new legislation.

Yadron quotes science fiction writer Bruce Bethke, who gives examples like your cellphone notifying your health insurance provider when you enter a tobacco shop. Users of Google Now on their Android phones have opted into a “service” that cross-references search, Gmail content, location, etc. to anticipate their wants and needs and then provides suggestions via Cards.  Why anyone finds this more helpful than creepy is a mystery to me. All I imagine is Montag’s doe-eyed wife, subservient to the system in Fahrenheit 451, when I contemplate the capacity for this technology to push behaviors, including political or social beliefs. Even at its most benign, it just sounds annoying, like they should have called it Google Nag instead of Google Now.

Meanwhile, we should expect to see a growing market for anti-surveillance products and services for what can only become an increasingly paranoid world in which we are voluntarily spying on ourselves.  As AlterNet reports, English designer Adam Harvey is making clothing that will shield against thermal imaging, and he’s demonstrating makeup techniques that will confound face-recognition software. Such efforts are endorsed by organizations like the Electronic Frontier Foundation and Fight for the Future because the presumed abusers of surveillance technology will be government agencies, but what about the more subtle private-enterprise promises of the networked society?

Will we live in “smart” homes enjoying their many conveniences but always sure to wear our cloaking PJs?   Will we need to buy and vigilantly update an array of countermeasures to protect our privacy inside our own walls because now they really do have ears?  As we interact with our own homes and vehicles and with one another, we will constantly be sending data to somebody’s servers somewhere.  We are already doing this, though not as holistically as the Internet of Things implies.  How do we write legislation that protects against corporate, government, or criminal abuse of these data and systems?  Or more immediately, whom can we expect to represent civil liberties in this context?

Because I think organizations like EFF and Fight for the Future are often haggling over small potatoes while getting nowhere near the larger question.  These digital rights activists—who are, by the way, dependent upon Silicon Valley support—make a lot of noise about our “right” to jailbreak these disposable, hand-held devices—something very few of us will ever bother to do—without coming close to having the real discussion about whether or not public-agency oversight will be able to protect consumers in a fully networked future.  When too much of the emphasis on anti-surveillance assumes “government” will be the only abuser, we forget that there is a profit motive in all this monitoring by private enterprise.  Meanwhile, as Google’s presence in Washington increases considerably, are legislators and executive branch officials getting advice from Google on how to protect us from Google?  Because one way or another, we seem to be voluntarily becoming a surveillance society, and I wonder if there will ultimately be an opt-out button.