AI & Ethical Determinism

As artificial intelligence (AI) moves from the realm of science fiction to everyday science, leading technologists and scientists are asking themselves how to write an ethical code. The most widely reported ethical dilemma involves the self-driving automobile and what’s known as the Trolley Problem. This hypothetical challenge asks whether you would divert a speeding trolley from one track to another, knowing that doing so will kill one person but save the lives of several others.

This problem is transposed to the world of driverless vehicles and how a car’s AI should make split-second, life-and-death decisions. It’s a challenge that raises not just technological or ethical questions but psychological ones as well, because the ways in which we cope with tragedy, or the anticipation of potential tragedy, in modern, developed society do not generally encompass the kind of determinism implied by AI.

Many people cope with tragic accidents through faith, a belief that there is a deity with a plan, even if that plan cannot be known. Those of us who are not religious cope without faith in a plan, by making peace with the fact that chaos and human fallibility can produce terrible outcomes. In either case, there is a degree of comfort in the uncertainty: not comfort that lessens the pain of loss per se, but comfort that enables us to rationalize terrible events and to step outside the door without living in abject fear. Uncertainty, coupled with probability and a measure of control, allows us to confidently get into our cars and drive around without expecting to be wiped out. Remove one of those factors, say the measure of control, and you likely have the explanation for why more people are anxious about flying than riding in cars, despite statistics showing that the opposite fear would be more rational.

In this regard, when the unforeseen event occurs (brakes fail, an obstacle suddenly appears in the road, etc.), the outcome of a human driver’s split-second reaction is arguably more the result of chance than of any kind of rational decision-making. Even in relatively common examples, drivers are told they should almost never swerve to avoid wildlife crossing the road, because the risk of hitting an oncoming car, or of slamming into a tree and killing themselves, is worse than hitting a squirrel. But the instinct to avoid is strong, and it gets stronger when the animal that suddenly appears is a stray cat or dog. The point is that whatever the outcome, whether a flattened squirrel or several dead motorists, the whole series of events, including the driver’s reaction, can be chalked up to a degree of uncertainty, and in that uncertainty lie many of our psychological coping mechanisms.

But what happens when humans pre-determine the outcome of certain ethical dilemmas and encode those choices into machines that we then authorize to make the decisions? In the simple case cited above, the squirrel is killed and all the humans live, but what about a split-second decision that will result in the death of a passenger versus the deaths of a mother and baby crossing the road? In the summer of 2016, MIT researchers, grappling with exactly these types of questions, launched a website called Moral Machine, which asks users to make a series of lose-lose decisions in various hypothetical traffic scenarios in which some parties face certain death. Anyone can take the “test”, and the site will reveal how you “score” relative to the ethical decisions made by others.
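The simplest version of such a pre-determined policy, the kind the Moral Machine implicitly measures users against, is trivial to sketch. The function and scenario below are invented purely for illustration, assuming the machine’s only rule is “minimize total deaths”; no real vehicle AI is claimed to work this way:

```python
# Hypothetical sketch of a numbers-only crash policy: minimize deaths.
# All scenario data is invented for illustration.

def choose_outcome(outcomes):
    """Return the maneuver with the fewest human deaths.

    Ties fall to whichever option appears first in the list, a crude
    stand-in for the element of chance that a human driver's instinct
    would otherwise introduce.
    """
    return min(outcomes, key=lambda o: o["deaths"])

# A stylized version of the squirrel example: swerving risks motorists,
# staying the course kills only the squirrel (zero human deaths).
scenario = [
    {"maneuver": "swerve", "deaths": 3},
    {"maneuver": "stay", "deaths": 0},
]
print(choose_outcome(scenario)["maneuver"])  # prints "stay"
```

The unsettling part, as the Moral Machine demonstrates, is everything this one-line rule leaves out.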

Of course, the Moral Machine tests present the user with information that a car’s AI would, in principle, never know, like the fact that some of the potential victims are criminals. Age, however, figures in some of the scenarios, and that condition strikes me as more credible: a vehicle might know it is carrying a septuagenarian couple and may, therefore, decide that it is more ethical to kill them rather than a young family. The senior couple might even make the same selfless decision themselves, but such calculations don’t really occur when a human operator is reacting to an emergency faster than he can think.

What’s eerie about some of these Moral Machine tests is the implication that the data set used to enable an AI to make ethical decisions could theoretically include more than mere numbers (i.e., more than a simple default to save more lives than it takes). Age could be a factor, but what about net worth or relative “value” to society? Does the AI wipe out a whole busload of kids to save one physicist or surgeon or even a Kardashian? What about race or sexual orientation? This raises the further question of whether these pre-determined decisions would be public knowledge or trade secrets, either of which presents a huge and unprecedented moral dilemma.

In this regard, an article that appeared just a few days ago tells us that the next generation of self-driving cars from Mercedes-Benz will be programmed to save the passengers regardless of circumstances. On the one hand, this is an algorithmic variation on the theme that the privileged class enjoys a priority lane on the right to life ahead of everyone else; but there is also something to be said for Mercedes choosing not to become trapped in the moral miasma of programming an ethical AI. Perhaps if all vehicles were required by law to default to a single directive, like try to save the passengers, then this would approximate the instinctive but fallible reactions of human drivers and still allow uncertainty to play a role, thus absolving engineers of the responsibility to “play God.” At least until the AI becomes self-aware and begins to make such decisions on its own.
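The difference between a single directive of this kind and a fully weighted “ethical” AI can be made uncomfortably concrete. Both policies below are hypothetical sketches, invented for illustration; neither reflects anything Mercedes or any other manufacturer has actually published:

```python
# Two hypothetical crash policies, invented for illustration only.

def save_passengers_policy(options):
    """Single directive: always pick the maneuver that maximizes
    passenger survival, regardless of other casualties."""
    return max(options, key=lambda o: o["passengers_saved"])

def weighted_utility_policy(options, weights):
    """An explicitly 'ethical' AI: every person spared carries a weight
    (by age, social 'value', etc.), the man-made determinism discussed
    above, made literal."""
    return max(options, key=lambda o: sum(weights[p] for p in o["spared"]))

options = [
    {"maneuver": "brake", "passengers_saved": 2, "spared": ["adult", "adult"]},
    {"maneuver": "swerve", "passengers_saved": 0, "spared": ["child", "child"]},
]

print(save_passengers_policy(options)["maneuver"])  # prints "brake"

# With children weighted more heavily, the "ethical" policy sacrifices
# the passengers instead:
weights = {"adult": 1, "child": 3}
print(weighted_utility_policy(options, weights)["maneuver"])  # prints "swerve"
```

Notice that the single directive needs no weights at all, which is precisely why it sidesteps the “playing God” problem, at the cost of privileging whoever happens to be inside the car.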

After all, it’s hard not to notice the dystopian implications of a man-made, ethical determinism when we remove the element of chance and cede authority over life-and-death decisions to machines. When we remove the psychological buffer provided by chance, fate, God’s will, etc., tragic events naturally demand explanation and, with it, invite an instinct to assign blame. This of course raises the companion question about those who would inevitably try to game the system, people who would “jailbreak” their vehicles to override any code that might not favor them as the chosen survivors of an accident. Suddenly, this places the civil libertarians who complain about the “right” to tinker with technological property on the wrong side of that argument insofar as the theoretical “greater good” is concerned.

The ethical AI question also becomes another factor leading to the conclusion that autonomous vehicles might not remain private property for very long. Rationally, for an AI shuttling all of us hither and yon to be remotely ethical, the system would have to treat everyone equally, suggesting that the private-property models from Mercedes or Tesla or Ford are merely stepping stones toward a public system, or at least a highly regulated one. But these outcomes are not what the manufacturers or leading data companies investing in this future are going to have in mind.

This is one reason I agree with President Obama, when he said in a recent Wired interview with Joi Ito conducted by Scott Dadich, that it is essential that public funding play a role in the development of AI.  “…part of what we’re gonna have to understand is that if we want the values of a diverse community represented in these breakthrough technologies, then government funding has to be a part of it. And if government is not part of financing it, then all these issues that Joi has raised about the values embedded in these technologies end up being potentially lost or at least not properly debated,” said Obama.

Of course, the president is referring to developing an ethical AI beyond just vehicles, and his point is well taken.  The sci-fi future of AI is already here.  But the questions as to what values drive the decision-making are just barely being asked in the public debate. Meanwhile, the corporate rhetoric of “disruption” has already absolved many sins in the areas of privacy and intellectual property infringement.  Or as Sam Kriss put it so well in his excellent article for The Atlantic:  “Silicon Valley works by solving problems that hadn’t heretofore existed; its culture is pathologically fixated on the notion of ‘disruption.’ Tech products no longer feel like something offered to the public, but something imposed: The great visionary looks at the way everyone is doing something, and decides, single-handedly, to change it.”

NPR reported in late September that computer scientists from the major tech giants (IBM, Apple, Amazon, Microsoft, Facebook, and Google) formed The Partnership on Artificial Intelligence to Benefit People and Society. One of the goals of the partnership is to develop standards for best practices in the field of AI, including tackling ethical questions like the Trolley Problem for vehicles. But it is essential that the public interest be represented in the development of these technologies, and as much as I may fault the Obama administration for being too Googley in various policy areas, I also credit the president himself for apparently thinking deeply about questions like how we develop an ethical AI. At the present rate of development and investment, let’s hope the outgoing president is not the last public representative to keep this conversation in the foreground.

Right to Jailbreak Auto Software May Be a Moot Point

Last month, a good friend of mine — an attorney who works in intellectual property and believes in its value — shared a brief post from BoingBoing by Cory Doctorow criticizing efforts by the auto industry to enforce the copyrights on software, now intrinsic in any contemporary vehicle, in order to limit consumer choice in the marketplace. In this case, Doctorow calls out GM for its efforts to stop the Copyright Office from granting an exception to the DMCA that would allow owners of GM vehicles to jailbreak the software, thus enabling them to perform their own diagnostics and maintenance at home, or to use non-GM-authorized service providers and parts. Such restrictions, increasingly asserted by automakers, are seen as a prime example of industry abusing intellectual property rights as protectionist measures to restrict liberty and limit competition.

Certainly, my pro-IP attorney friend shared the story with the comment that she feels automakers are overreaching; and, in this regard, she is consistent with most copyright advocates as well as the courts, which have generally favored competition when other industries have tried to use DMCA anti-circumvention measures to control a market. So, in this particular moment in history, the complaint is understandable; though the conversation itself, I believe, raises a much broader and more interesting subject beyond contemporary copyright: the question of exactly what kind of future we think we’re building.

In fact, Cory Doctorow himself is one of the more prominent voices presently insisting that those of us who still place considerable value on copyright and IP in general are anachronisms. We are told that we are metaphorically “clinging to the buggy whip industry while automobiles pass us by.” But in his criticism of restrictions on jailbreaking contemporary cars, I have to ask exactly who’s clinging to the past. Because the more our automobiles become sophisticated computers on wheels, and most especially if we are serious about migrating toward a future of driverless (or driver-optional) cars, it seems to me these complaints about automakers’ application of copyright cling to rapidly fading concepts of ownership, concepts that will naturally continue to change if we take the futurists and tech-utopians seriously.

In a future system in which an automobile becomes just one dynamic node in a vast traffic grid that is holistically maintained by software — because that is the only way it could work — not only will individuals not be allowed to service their own cars, but the very idea that a car may be “owned,” as we presently define that term, could be scrapped along with the last internal combustion engine.  As Jaron Lanier suggests in Who Owns the Future, a driverless paradigm may be brought about by public mandate if it can be demonstrated that automobile fatalities and serious injuries can be reduced by a substantial margin. And more recently, articles have been appearing that predict we’ll at least see driverless taxis within a decade or so, while others have examined the environmental benefits of a driverless future.  Combine these factors with certain market realities — like the fact that American millennials will be the first generation to earn less than previous generations and that they concurrently reveal a general comfort with “sharing economy” concepts that erase traditional notions of ownership —  and our long-standing relationship with automobiles as symbols of personal freedom could give way to a driverless future that would necessitate something like a public/private model of personal transportation.

I know this projection is probably unappealing to many Americans today because we do have a unique relationship with our cars, our big open roads, our ability to be our own mechanics, and our sense of liberty.  But that’s exactly why I think the larger question as to what kind of technological future we’re building is far more intriguing than any momentary complaints about an automaker using copyrights to restrict traditional market freedoms in maintenance.  Sure, we can, and probably should, demand the right to jailbreak cars on principle right now, but that principle could become moot faster than we think.

It seems reasonable to assume that if we are to embrace the Internet of Things, then as ordinary functions of our lives are made easier, safer, cheaper, or faster by networked systems, the concept of “owning” many types of property is likely to change. In many cases, these predictions seem to imply a return to older models based on monopolistic, semi-regulated industries. When I was a kid, nobody owned a telephone. A household got an account with the one phone carrier that served the community, and then leased as many phones as it needed from the same company, much as we still lease cable boxes to this day. So, if we progress toward a future of smart, driverless cars and smart homes, at what point do regulations like building codes, consumer and environmental protections, or safety rules merge with intellectual property to become an intertwined body of law that inherently limits our present sense of personal liberty vis-à-vis those items we presently call “our stuff”?

Consider the flap from libertarians over the CFL bulb years ago, or the overreach by Keurig in its attempt to use IP to thwart the sale of off-brand cups for its coffee makers; then imagine how many components of your home might one day be part of a complex, data-driven network that only works properly if all the compatible units are precisely installed and maintained. “Your” house would become just one little Christmas light on a vast strand of homes and businesses that society cannot afford to let go out. How could such a highly automated and integrated system function without limiting individual choice and potentially threatening competition in the market? Of course, our adoption of this kind of holistic, networked integration of daily life is either unlikely, or it depends on social changes so profound that they are hard to fathom. But as long as we’re talking about broader principles, let’s talk about them with regard to the technologies and models we’re being told are the future, rather than merely react to momentary misapplications in models we believe to be rapidly fading into the past.

By and large, I assume we enjoy the benefits of safety and convenience that come with computer-assisted cars, but now we’re on the leading edge of the question as to whether we ultimately want computer-centric cars. If so, it is probably unavoidable that we must recalibrate our definitions of ownership if we are going to allow the machines to drive us rather than the other way around. Thus, I find it a strange contradiction to highlight every example, at this moment in history, that reveals copyright to be a restraint on personal liberty when copyright is simultaneously claimed to be a restraint on innovation itself, because many a predicted innovation may ultimately limit traditional liberties by virtue of paradigmatic change, resulting in fewer consumer choices in various sectors. A driverless-vehicle paradigm implies a model that is fundamentally communal, which doesn’t have to be a bad thing for society per se, but it is certainly anathema to the American sense of personal liberty with regard to “our” cars. As such, Doctorow’s complaint, while perhaps valid in this moment, appears to rust a little on the page almost as quickly as it can be read.