As artificial intelligence (AI) moves from the realm of science fiction into everyday science, leading technologists and scientists are asking how to write an ethical code. The most widely reported ethical dilemma involves the self-driving automobile and what’s known as the Trolley Problem. This hypothetical challenge asks whether you would divert a runaway trolley from one track to another, knowing that doing so will kill one person but save the lives of several others.
This problem is transposed to the world of driverless vehicles and the question of how a car’s AI should make split-second, life-and-death decisions. It’s a challenge that raises not just technological and ethical questions but psychological ones as well, because the ways in which we cope with tragedy—or the anticipation of potential tragedy—in modern, developed society do not generally encompass the kind of determinism implied by AI.
Many cope with tragic accidents through faith—a belief that there is a deity with a plan, even if that plan cannot be known. Those of us who are not religious cope without faith in a plan—by making peace with the fact that chaos and human fallibility can produce terrible outcomes. In either case, there is a degree of comfort in the uncertainty—not comfort that lessens the pain of loss per se, but comfort that enables us to rationalize terrible events or to step outside the door without living in abject fear. Uncertainty, coupled with probability and a measure of control, allows us to get into our cars and drive around confidently without expecting to be wiped out. Remove one of those factors, the measure of control for instance, and you likely have the explanation for why more people are anxious about flying than about riding in cars, despite statistics proving that the opposite should be true.
In this regard, when the unforeseen event occurs—brakes fail, an obstacle suddenly appears in the road, etc.—the outcome of a human driver’s split-second reaction is arguably more the result of chance than of any kind of rational decision-making. Even in relatively common situations, drivers are told that they should almost never swerve to avoid wildlife running across the road; better to hit a squirrel than to risk colliding with an oncoming car or slamming into a tree. But the instinct to avoid is strong, and it gets stronger when the animal that suddenly appears is a stray cat or dog. The point is that whatever the outcome—whether a flattened squirrel or several dead motorists—the whole series of events, including the driver’s reaction, can be chalked up to a degree of uncertainty, and in that uncertainty lie many of our psychological coping mechanisms.
But what happens when humans pre-determine the outcomes of certain ethical dilemmas and encode them into machines that we then grant the authority to make those decisions? In the simple case cited above, the squirrel is killed and all the humans live, but what about a split-second decision that will result in the death of a passenger versus the deaths of a mother and baby crossing the road? In the summer of 2016, MIT researchers grappling with exactly these types of questions launched a website called Moral Machine, which asks users to make a set of lose-lose decisions in various hypothetical traffic scenarios in which some parties face certain death. Anyone can take the “test,” and the site will reveal how you “score” relative to the ethical decisions made by others.
Of course, the Moral Machine tests present the user with information that a car’s AI would, in principle, never know—like the fact that some of the potential victims are criminals. But age is a factor in some of the scenarios, a condition that strikes me as more credible—that a vehicle might know it’s carrying a septuagenarian couple and may, therefore, decide that it is more ethical to kill them rather than a young family. The senior couple might even make the same selfless decision themselves, but such calculations don’t really occur when a human operator is reacting to an emergency faster than he or she can think.
What’s eerie about some of these Moral Machine tests is the implication that the data set used to enable an AI to make ethical decisions could theoretically include more than mere numbers (i.e. more than a default rule to save more lives than it takes). Age could be a factor, but what about net worth or relative “value” to society? Does the AI wipe out a whole busload of kids to save one physicist or surgeon or even a Kardashian? What about race or sexual orientation? This raises the question of whether these pre-determined decisions would be public knowledge or trade secrets, both of which present huge and unprecedented moral dilemmas.
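To make that concern concrete, here is a deliberately toy sketch, in Python, of what it would mean for a collision algorithm to encode pre-determined values beyond a simple head count. Every name in it (the Person fields, the “social value” weight, the maneuver labels) is my own invention for illustration and is not drawn from any real vehicle’s software.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    estimated_social_value: float = 1.0  # hypothetical weight; this field is the troubling part

def outcome_score(people_killed):
    # A pure head-count rule would simply return len(people_killed); but once other
    # attributes exist in the data, nothing stops a designer from weighting them.
    return sum(p.estimated_social_value for p in people_killed)

def choose_maneuver(options):
    # options maps each maneuver name to the list of people predicted to die if it is chosen;
    # the car picks whichever outcome "scores" lowest.
    return min(options, key=lambda name: outcome_score(options[name]))

# Example: swerve (killing the elderly passenger) vs. stay the course (killing a mother and baby).
decision = choose_maneuver({
    "swerve": [Person(age=72)],
    "stay_course": [Person(age=34), Person(age=1)],
})
print(decision)  # with equal weights this reduces to counting lives, so it prints "swerve"
```

The arithmetic is trivial; the point is that once such attributes are in the data, someone has to decide, in advance and in code, how much each one counts.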
In this regard, an article that appeared just a few days ago tells us that the next generation of self-driving cars from Mercedes-Benz will be programmed to save the passengers regardless of circumstances. On the one hand, this is an algorithmic variation on the theme that the privileged class enjoys a priority lane on the right to life ahead of everyone else; on the other, there is something to be said for Mercedes choosing not to become trapped in the moral miasma of programming an ethical AI. Perhaps if all vehicles were required by law to default to a single directive like try to save the passengers, then this would approximate the instinctive but fallible reactions of human drivers and still allow uncertainty to play a role, thus absolving engineers of the responsibility to “play God.” At least until the AI becomes self-aware and begins to make such decisions on its own.
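By contrast, the single-directive approach reduces, at least conceptually, to something as blunt as the following one-rule sketch; again, this is my own illustration, not Mercedes-Benz’s actual code.

```python
def choose_maneuver_single_directive(passenger_risk):
    # passenger_risk maps each available maneuver to an estimated risk to the occupants.
    # The vehicle simply minimizes that risk; no one outside the car is ever weighed
    # against anyone else, and everything beyond that is left to chance.
    return min(passenger_risk, key=passenger_risk.get)

print(choose_maneuver_single_directive({"swerve": 0.9, "brake_hard": 0.2}))  # prints "brake_hard"
```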
After all, it’s hard not to notice the dystopian implications of a man-made, ethical determinism when we remove the element of chance and cede to machines the authority to carry out life-and-death decisions. When we remove the psychological buffer provided by chance, fate, God’s will, etc., tragic events naturally demand explanation and, therefore, an instinct to assign blame. This of course raises the companion question about those who would inevitably try to game the system: people who would “jailbreak” their vehicles to override any code that might not favor them as the chosen survivors of an accident. Suddenly, this places the civil libertarians who insist on the “right” to tinker with technological property on the wrong side of that argument insofar as the theoretical “greater good” is concerned.
The ethical AI question also becomes another factor leading to the conclusion that autonomous vehicles might not remain private property for very long. Rationally, for an AI to shuttle all of us hither and yon, the system would have to be a level one in order to be even remotely ethical, suggesting that the private-property models from Mercedes or Tesla or Ford are merely stepping stones toward a public system, or at least a highly regulated one. But those outcomes are hardly what manufacturers or the leading data companies investing in this future have in mind.
This is one reason I agree with President Obama when he said, in a recent Wired interview alongside Joi Ito conducted by Scott Dadich, that it is essential that public funding play a role in the development of AI. “…part of what we’re gonna have to understand is that if we want the values of a diverse community represented in these breakthrough technologies, then government funding has to be a part of it. And if government is not part of financing it, then all these issues that Joi has raised about the values embedded in these technologies end up being potentially lost or at least not properly debated,” said Obama.
Of course, the president is referring to developing an ethical AI beyond just vehicles, and his point is well taken. The sci-fi future of AI is already here. But the questions as to what values drive the decision-making are just barely being asked in the public debate. Meanwhile, the corporate rhetoric of “disruption” has already absolved many sins in the areas of privacy and intellectual property infringement. Or as Sam Kriss put it so well in his excellent article for The Atlantic: “Silicon Valley works by solving problems that hadn’t heretofore existed; its culture is pathologically fixated on the notion of ‘disruption.’ Tech products no longer feel like something offered to the public, but something imposed: The great visionary looks at the way everyone is doing something, and decides, single-handedly, to change it.”
NPR reported in late September that computer scientists from the major tech giants—IBM, Amazon, Microsoft, Facebook, and Google—formed the Partnership on Artificial Intelligence to Benefit People and Society. One of the goals of the partnership is to develop standards for best practices in the field of AI, including tackling ethical questions like the Trolley Problem for vehicles. But it is essential that the public interest be represented in the development of these technologies, and as much as I may fault the Obama administration for being too Googley in various policy areas, I also credit the president himself for apparently thinking deeply about questions like how we develop an ethical AI. At the present rate of development and investment, let’s hope the outgoing president is not the last public representative to keep this conversation in the foreground.
Of course, in a networked system one would indeed be able to tell that the passenger in one self-driving car was Archbishop Fénelon and in the other car the programmer’s mother.
“…that a vehicle might know it’s carrying a septuagenarian couple and may, therefore, decide that it is more ethical to kill them rather than a young family…”
Oh, yeah? But what if the septuagenarian couple or single person is still supporting family members?
Or what if the septuagenarian is a world-renowned scientist on the verge of a cure for cancer?
Or what if the septuagenarian has already put in enough labor and money, and will be damned if they should give up the ghost as if all those years mean nothing and they do not have the right to every last minute they can scrape together?
Thanks for commenting, LT, but you’re making my point. Can we tolerate a predetermined outcome programmed into the machine as to who lives or dies? I think when we remove the universal fairness of uncertainty, it becomes quite problematic. Fate and chance are equal opportunity killers.