Apple v FBI Doesn’t Address the Real Challenge

In a story that appeared Monday in The Guardian, writer Danny Yadron projects a hypothetical, but not technically unrealistic, future scenario: our driverless car hijacks a run to the grocery store, transporting us instead to a police station because face-recognition software has flagged us as wanted for questioning in an investigation. The eerie scenario, Yadron reports, comes from engineer and former US government privacy-policy consultant Ashkan Soltani, who warns that this kind of circumstance could become reality if Apple loses its fight with the FBI over whether it must write code to circumvent the security system of San Bernardino terrorist Syed Farook's iPhone.

Of course, it is not farfetched to anticipate new forms of abuse in our increasingly networked lives, and it is prudent to seek remedies in policy and case-law precedent that may preempt such scenarios; but I'm not sure that a ruling one way or another in the Apple case would be quite so prophetic as some observers suggest. In fact, assuming we do become increasingly networked and adapt to the holistic Internet of Things as effortlessly and rapidly as the technologists expect us to, the matter of protecting civil liberties in this future society seems increasingly paradoxical. After all, government agencies are supposed to be our hedge against the excesses of private enterprise that might invade our privacy or run afoul of antitrust restrictions, and to protect us from criminal abuse of the same systems. But do we simultaneously expect private enterprise or "white hat" hackers to protect us from the overreach of government?

Yadron's article addresses several aspects of this challenge, citing competing points of view from the policy, law-enforcement, and technology sectors. And there are no easy answers. For one thing, the current Apple case, involving the cell phone of a known terrorist and a warrant issued by court order, may be too narrow to settle the broader question of who controls the code that runs our day-to-day lives. As of yesterday morning, the FBI announced that it may be able to crack the iPhone without Apple's help; but even if the presently suspended legal case were to proceed, Yadron notes that the court could rule in the FBI's favor in this one extraordinary instance while remaining silent on the much larger question.

My own assumption is that, with regard to cases involving law enforcement, the public is still served by the courts and due process, and that new legislation may not be necessary to adapt to new technology. For instance, as dramatic as the futuristic arrest-by-driverless-car scenario may sound, it would be an illegal detention under existing statute, at least the way Soltani imagines it. But if similar automation were one day used to capture a wanted criminal based on evidence and an arrest warrant, due process would not necessarily suffer just because the arrest would be partly effected via code. Particularly as we anticipate an inevitable increase in automated law-enforcement practices, if we cannot continue to invest faith and power in judicial oversight, we're basically hosed.

With regard to living day-to-day in a networked society, though, we probably have to imagine scenarios more subtle than the automated arrest by our own robot vehicles—like undetectable invasions that track habits and behaviors, all organized into data that could be used to manipulate or determine opportunities for jobs, education, healthcare, insurance, credit, and so on.  The opportunities these encroachments provide for mischief by corporate, criminal, or government entities are indeed new territory—much more so it seems than the Apple/FBI case—and could easily demand new legislation.

Yadron quotes science fiction writer Bruce Bethke, who gives examples like your cellphone notifying your health insurance provider when you enter a tobacco shop. Users of Google Now on their Android phones have opted into a "service" that cross-references search, Gmail content, location, etc. to anticipate their wants and needs and then provides suggestions via Cards. Why anyone finds this more helpful than creepy is a mystery to me. All I imagine is Montag's doe-eyed wife, subservient to the system in Fahrenheit 451, when I contemplate the capacity for this technology to push behaviors, including political or social beliefs. Even at its most benign, it just sounds annoying, like they should have called it Google Nag instead of Google Now.

Meanwhile, we should expect to see a growing market for anti-surveillance products and services in what can only become an increasingly paranoid world in which we are voluntarily spying on ourselves. As AlterNet reports, English designer Adam Harvey is making garments that shield against thermal imaging, and he's demonstrating makeup techniques that confound face-recognition software. Such efforts are endorsed by organizations like the Electronic Frontier Foundation and Fight for the Future because the presumed abusers of surveillance technology will be government agencies, but what about the more subtle private-enterprise promises of the networked society?

Will we live in “smart” homes enjoying their many conveniences but always sure to wear our cloaking PJs?   Will we need to buy and vigilantly update an array of countermeasures to protect our privacy inside our own walls because now they really do have ears?  As we interact with our own homes and vehicles and with one another, we will constantly be sending data to somebody’s servers somewhere.  We are already doing this, though not as holistically as the Internet of Things implies.  How do we write legislation that protects against corporate, government, or criminal abuse of these data and systems?  Or more immediately, whom can we expect to represent civil liberties in this context?

Because I think organizations like EFF and Fight for the Future are often haggling over small potatoes while getting nowhere near the larger question. These digital rights activists, who are dependent upon Silicon Valley support by the way, make a lot of noise about our "right" to jailbreak these disposable, hand-held devices, something very few of us will ever bother to do, without coming close to having the real discussion about whether or not public-agency oversight will be able to protect consumers in a fully networked future. When too much of the emphasis on anti-surveillance assumes "government" will be the only abuser, we forget that there is a profit motive in all this monitoring by private enterprise. Meanwhile, as Google's presence in Washington increases considerably, are legislators and executive-branch officials getting advice from Google on how to protect us from Google? Because one way or another, we seem to be voluntarily becoming a surveillance society, and I wonder if there will ultimately be an opt-out button.

© 2016, David Newhoff. All rights reserved.

  • The only "help" being requested of Apple was a means of disabling the auto-erase function that would wipe all the data after a certain number of password attempts. The EFF has always been primarily about freedom to steal software, because cheap or free software sells personal computers, not to mention Google ads. They only address issues that might harm the perceived value of silly-con valley stocks. They say nothing about Google walking all over our privacy.

  • Tim Cook is the ultimate hypocrite…Considering the amount of data mining that Apple does (with all of its products), and the intrusiveness of iTunes, he ought to just shoot himself. Does he really think that anyone who knows anything finds him to be the hero he’s putting himself out there as?
