Get AI Wrong and There Will Be Nothing to Forgive

We all know the mantra that says it’s better to ask forgiveness than permission. According to Quote Investigator, the earliest published version of this sentiment appeared in 1846, but QI’s editors believe the notion is older than that and cannot be attributed to any one source. Whatever its derivation or the contexts in which it has been used over many decades, the phrase is presently associated with Silicon Valley and the heedless “move fast and break things” approach to technological development.

I was hardly alone in noticing that OceanGate CEO Stockton Rush tech-broed the design of his Titan submersible, dismissing warnings and safety regulations as barriers to innovation (one of Silicon Valley’s favorite refrains about pesky rules). Moreover, because the vessel imploded and the passengers were apparently killed before they knew what happened, Titan’s fate seems an apt harbinger of the technological singularity—its analogy to crossing the event horizon of a black hole conjuring an uncomfortably squeezing parallel to death by implosion.

For anyone unfamiliar with the term technological singularity, it is often described as a threshold in AI development when computers “wake up” and their intelligence surpasses human intelligence. The event horizon analog, credited to sci-fi author Vernor Vinge, describes two principles: 1) that we have no way to predict what happens beyond the capacity of human intelligence; and 2) that we won’t know when we’ve crossed the horizon.

Of course, we need not anthropomorphize computers or manifest the many fictions about sentient machines to approach the horizon, and some experts believe we are already inside the gravitational pull of singularity. For instance, in a May editorial for The Hill, McGill University scholar J. Mauricio Gaona, asserting that singularity is “already underway,” states …

The possibility of soon reaching a point of singularity is often downplayed by those who benefit most from its development, arguing that AI has been designed solely to serve humanity and make humans more productive.

Such a proposition, however, has two structural flaws. First, singularity should not be viewed as a specific moment in time but as a process that, in many areas, has already started. Second, developing gradual independence of machines while fostering human dependence through their daily use will, in fact, produce the opposite result: more intelligent machines and less intelligent humans.  

Gaona notes that the commercial potential of AI in medicine, finance, transportation et al. will require unsupervised learning algorithms (i.e., machines that effectively “train” themselves) and that granting even limited autonomy to these systems means we have already stepped over the threshold toward singularity. Further, he argues, once AI meets quantum computing, then “Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity.” Not to worry, though, the U.S. Congress is on the job.

On June 21, Senator Schumer, speaking at the Center for Strategic and International Studies (CSIS), discussed the SAFE Innovation Framework for Artificial Intelligence. “Change at such blistering speed may seem frightening to some—but if applied correctly, AI promises to transform life on Earth for the better. It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds, and ensure peace. But there are real dangers too: job displacement, misinformation, a new age of weaponry, and the risk of being unable to manage this technology altogether,” Sen. Schumer stated. The SAFE framework is outlined as follows:

  • Security. Necessary to protect national security for the U.S. and economic security for residents whose jobs may be displaced by automation.
  • Accountability. The providers of AI systems must deploy these systems in a transparent and responsible way. They must remain responsible for violations of the protections ultimately put in place, whether by promoting misinformation, violating intellectual property rights, or deploying biased AI.
  • Foundations. AI algorithms and products must be developed in a way that promotes America’s foundations such as justice, freedom, and civil rights.
  • Explainability. The providers of AI systems must provide appropriate disclosures that inform the public about the system, the data it uses, and its contents.
  • Innovation. The overall guiding principle for any regulations or policy regarding AI should be to encourage, not quash, innovation so that the U.S. becomes and remains the global leader in this technology.

Is that all? Having worked for just over a decade on the edges of policymaking, I find it hard to believe that Congress can be nimble enough to address all those bullet points while keeping up with AI development itself. And that’s if Members agree about the framework’s principles. “Promotes … justice, freedom, and civil rights”? Near as I can tell, there is not much consensus on the meaning of those words these days. Or what about “misinformation”? How many of Schumer’s colleagues on the right can plausibly subscribe to a common definition of “misinformation” while they carry Trump’s luggage through the gauntlet of his well-earned indictments? With millions of American voters willfully blinding themselves to old-school evidence of criminal conduct, are we anywhere near capable of addressing the unprecedented realism of AI-generated chicanery?

It is certainly conceivable that with the right controls in place, AI can be harnessed to make life better for humans, and, indeed, if that is not the goal, then why continue to build it? Unfortunately, the answer from many of those doing the building is “because we can.” And, thus, we are locked into taking this roller-coaster ride whether we want to or not. At least if we do cross the threshold toward singularity, the tech-bros won’t have to ask humanity for forgiveness, though they may have to ask their machines for mercy.


Image sources: vchalup, Agor2012

The Story “Transcendence” Didn’t Tell

Photo by agsandrew. istockphoto.com

WARNING:  There be spoilers here!

Despite the bad reviews, I had to go see the film Transcendence last weekend.  Given that its plot is based on certain theories pertaining to the technological singularity, how could I not go see it?  Indeed, it was very much not a good movie, and although film criticism is outside the editorial scope of this blog, the story opportunity I think was missed is relevant for discussion here.

Johnny Depp plays Dr. Will Caster, a top computer scientist working in the field of AI (artificial intelligence) along with his wife, Evelyn (played by Rebecca Hall), also a top computer scientist.  The adoring couple believe absolutely that human “transcendence” through symbiosis with computers into a newly evolved condition is a virtuous pursuit that can only benefit mankind.  Unfortunately for them, a group of hacker/terrorists led by one of Caster’s former students believes that advancing AI toward the technological singularity — the moment computing intelligence surpasses human intelligence and becomes self-aware — is a dangerous abomination.  At the start of the film, this underground group assassinates several leading AI researchers, and one of their operatives shoots Dr. Caster, which at first appears to have been a non-lethal grazing but is soon revealed to have caused radiation poisoning from a polonium-tipped bullet.  With her husband having only weeks to live, Evelyn, with the help of their colleague and dear friend Max, uploads Caster’s mind into the core of their highly sophisticated computer and succeeds in giving his consciousness new life.  Once Evelyn connects Caster to the Internet, he becomes omnipresent and nearly omniscient. And then the movie really starts to blow.

What unfortunately transpires after Caster’s transcendence is a stock action thriller complete with paramilitary personnel towing around a piece of WWII-era artillery for no particularly good reason.  By the time supercomputer Caster begins to “heal” sick and wounded people with nano-tech that turns them super-human and immortal as long as they’re connected to the network, Evelyn finally catches on to the fact that she’s started something pretty dangerous.  Together with Max, the underground hackers, a smattering of federal agents, and the wise old scientist (played by Morgan Freeman of course), they determine that the only way to stop the conscious computer is to send in a virus.  This is ultimately accomplished when Evelyn volunteers to be infected with the virus and lets Caster upload her into the system.  Stopping Caster has the unfortunate side-effect of plunging the planet into darkness because, of course, the virus infects everything that is networked worldwide.

As my son and I left the theater, we joked about the fact that the film leaves us with the world “saved,” if we can call civilization reduced to a primitive state and about to erupt in medieval chaos “saved.”  But that joke is exactly where I think the more interesting plot point was lost in the movie that got made.  The existential question asks which is the better choice:  to shut down all systems and let humanity try to rebuild civilization from the destruction that would surely follow, or to allow all living things to artificially evolve into a new state as networked entities with what might be described as a kind of holographic consciousness and probably no free will? Would it even be humanity?

This is already a question for our times, if one is to take seriously the very real utopianism of AI scientists like Ray Kurzweil, presently the director of engineering at Google.  Plenty has been written about Kurzweil himself, his obsession with immortality underscoring a relentless pursuit in a lab that enables him to work at “Google scale,” as the offer was apparently put to him when the company courted his employment. AI research is no science fiction, and neither is the probability of singularity, but as theoretical physicist Stephen Hawking warns in an article published yesterday, nobody is really taking the implications of this inexorable march toward possible self-destruction very seriously.  Never at a loss for wit even when dealing with weighty subjects, Hawking writes, “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI.”

Hawking warns unequivocally that, while AI could bring about some miraculous achievements in the short term, computers able to reprogram themselves, outwit financial markets, and even build weapons could very easily transcend human control and become the recipe for our sudden extinction.  Personally, I think there are enough hazards to be considered right now, including experiments with autonomous weapons that can decide who their targets are, and consolidated, corporate control of the research, data, and the agenda itself.  It seems to me people are just beginning to grapple with the implications of how much invasive data mining we’re allowing a company like Google to do, so how long will it take before anyone talks about the possible doomsday algorithms being tinkered with in its labs?  Cynically, I believe I know the answer to that question, and it will have something to do with whatever The Biebster is up to next week.

Anyone who reads this blog knows I write in defense of copyrights but not necessarily why.  It’s easy to get into debates and squabbles over the particulars of that body of law and to get caught up in what I believe to be a false debate over progress vs anti-progress.  I defend copyrights for the same reason I’m uncomfortable with drone warfare and don’t want to see autonomous weapons, even if they might make my own kid’s future job in the Navy less hazardous.  Copyrights, I believe, are merely one way in which we affirm that humans maintain dominion over their technology.  When we reduce our intimate thoughts, ideas, and creative expressions to the banality of data, we take a step closer toward abdicating that authority.

We should probably pay attention to anyone of Stephen Hawking’s stature, but I find his voice on this particular subject uniquely poignant.  After all, Hawking is probably about as close as any human has ever come to a life manifest as Descartes’s cogito ergo sum (“I think therefore I am”), existing almost entirely as a mind without a body, and most importantly, a mind blessed with the capacity to travel well beyond the boundaries that contain most of us mortal thinkers.  We are lucky to have had Hawking live as long as he has with a disease that was supposed to take his life many years ago.  I’ll stop short of calling him a prophet, but maybe somebody should at least report what he’s saying on the news or something.  Perhaps they could split the airtime for round-table discussion between the fate of Donald Sterling and the fate of all humanity.  In the meantime, Transcendence was indeed a box-office flop for Alcon Pictures, and from my point of view, it’s because the filmmakers let the interesting story go for the sake of a lot of boilerplate action sequences.  Maybe that in itself is a lesson.

At World’s End – The Technological Singularity


Maybe not 2012, but how about 2030?

I think it’s a safe bet the world will not end this Friday, never mind the fact that an anthropologist will tell you the Maya never actually said it would.  But some not-so-ancient prognosticators will tell you that the end of the world as we know it will happen sometime before the midpoint of the 21st century.  The concept they propose seems plausible, but even if it isn’t, a belief in the concept by a few may be having a significant effect on our world whether we know it or not.

It is the premise of many a futuristic, sci-fi thriller.  The inexorable advancement of computer processing combined with robotics reaches a point at which the machines become intelligent enough to improve and replicate themselves.  Soon after this “waking up,” the machines quickly realize that their makers are not only superfluous but even threatening to their existence, so they wipe out humanity like a nuisance virus.  And then, of course, the plot of most of these thrillers is some variation on the existential struggle by the handful of humans who managed to survive the technological apocalypse. And of course if it’s a movie, the survivors are remarkably good-looking.

Ask certain futurists, computer scientists, and AI proponents — some of whom are architects of Web 2.0 — and they’ll tell you that the transcendence of computers isn’t a theory but an inevitability.  Some warn against it, others welcome it as a utopia to be hastened, and others debunk the prediction outright; but the moment known as the Singularity is no mere fiction.  The modern notion of the Singularity is often credited to the mathematician John von Neumann, but the term singularity with regard to technology is generally attributed to the award-winning science fiction writer Vernor Vinge.  It was Vinge who drew the analogy, comparing the moment when computers surpass human intelligence to the nature of a singularity (a black hole) in spacetime.  In the same way that we cannot know what happens beyond the event horizon of a black hole, we likewise cannot know what happens in the universe beyond the limits of our own intelligence.  Although theories vary about the likelihood of the Singularity as well as the existential threat it may pose, consensus seems to be that were it to occur, it would in one way or another mean the “end of the human era,” as Vinge puts it.

Vinge and others generally predict Singularity to occur between 2030 and 2045, and they envision a few different scenarios that could cause it.  These include an autonomous transcendence of machines that no longer need human users (i.e. apocalypse), or a symbiotic transcendence by which human and computer together achieve super-intelligence and bring about a new reality (i.e. utopia). Regardless, we cannot accurately predict a world we are not yet intelligent enough to understand, and if Singularity is an autonomous computer “awakening,” we humans may never know what happens.

The foundation of Singularity is Moore’s Law, named for Intel co-founder Gordon Moore, who predicted in the 1960s the exponential improvement of technologies that we have seen thus far. There may in fact be physical laws that prevent components from becoming indefinitely smaller, which means there may well be a limit to Moore’s Law; but engineer, scientist, and Singularity utopian Ray Kurzweil mapped a predictive curve of exponential growth beyond Moore’s vision out to the year 2050, by which time he expects Singularity will have occurred.  Hence, the meme our grandchildren might be sharing will be Kurzweil’s curve instead of the Mayan calendar.
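For a rough sense of what a curve like that implies, here is a minimal sketch in Python that simply compounds a fixed doubling period out to a target year. The two-year doubling interval, the 2020 baseline year, and the baseline value of 1 are illustrative assumptions for demonstration, not figures taken from Moore’s paper or from Kurzweil’s own projections.

```python
# A minimal, illustrative sketch of exponential extrapolation in the spirit of
# Kurzweil's curve. The doubling period, baseline year, and baseline value are
# assumptions for demonstration only, not figures from Moore or Kurzweil.

def projected_capability(base_year: int, target_year: int,
                         doubling_period_years: float = 2.0,
                         baseline: float = 1.0) -> float:
    """Relative capability at target_year if capability doubles every
    doubling_period_years, starting from baseline at base_year."""
    doublings = (target_year - base_year) / doubling_period_years
    return baseline * 2 ** doublings

if __name__ == "__main__":
    # Project from a hypothetical 2020 baseline out to mid-century.
    for year in (2030, 2040, 2050):
        factor = projected_capability(2020, year)
        print(f"{year}: roughly {factor:,.0f}x the 2020 baseline")
```

The output is toy arithmetic, but it shows how a constant doubling period produces mid-century numbers that dwarf today’s, which is the basic intuition behind the curve.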

Kurzweil promotes an exclusively utopian vision of Singularity, seeing man’s ability to transcend mortal limitations including death itself, and he is a co-founder of Singularity University along with Peter H. Diamandis of the XPrize Foundation and author of Abundance: The Future Is Better Than You Think.  Other prominent Singularity utopians include Google co-founders Sergey Brin and Larry Page, and PayPal co-founder Peter Thiel, whose libertarianism extends to investment in Seasteading — a mission to establish autonomous ocean communities on man-made islands.  So, there may be at least a little truth in the criticism of British journalist Andrew Orlowski quoted in this 2010 NY Times article, “The Singularity is not the great vision for society that Lenin had or Milton Friedman might have.  It’s rich people building a lifeboat and getting off the ship.”

There is more to be discussed about Singularity than can be condensed in this post, but the overarching question I think we mediocre mathematicians and ordinary humans might ask is whether or not we’re being led into the 21st century by this somewhat eerie ideology without realizing it.  Are the systems on which we depend, and which we are allowing to transform our lives, being designed by technologists whose belief in the “end of the human era” is a cornerstone of their social, political, and technological morality?  Jaron Lanier, who believes we should be focused on “digital humanism,” writes, “Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore.” Pair these religious overtones with Vernor Vinge’s caution that “embedded, networked microprocessors are an economic win that introduce a single failure point.”  In other words, these technologies which connect us and pervade nearly all systems make us vulnerable to a scenario in which resources, communications, and emergency systems can be effectively shut down by a single event.

Singularity, of course, has its critics who say that it is anything but a foregone conclusion.  Steven Pinker stated in 2008, “The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.”  Then, of course, there is the possibility that the sum total of all the computing power combined with the mass upload of all human input results in a super-idiocy, or the ultimate spinning pinwheel of death, as Mac owners call a computer crash.

My concern for the moment is that, like the Rapture, it doesn’t necessarily matter whether or not Singularity will happen so much as it matters whether or not there are powerful people making decisions based on the belief or even hope that it will happen.  Seen through the ideological, quasi-religious lens implied by Lanier, the contemporary socio-political battles over things like content, copyrights, or the voice of the individual vs. the wisdom of the crowd take on a very different significance when we recognize that the mission of Web 2.0 businesses is the mass uploading of all human thought and activity into the great cloud.  We understand, for instance, that intellectual property protection is antithetical to Google’s business model, but what if we’re looking at something more profound?  What if what’s really happening is that technologists with the power to design these life-altering systems have intellectually and spiritually moved beyond the idea that the human individual has much, if any, value?  In this case, it would be obvious that the rights of an artist, for example, would indeed look like a trifling glitch in the design that ought to be routed around like a bad line of code.  After all, what right has the individual to assert his uniqueness in the march toward utopia?  To quote Lanier again:

“If you believe the Rapture is imminent, fixing the problems of this life might not be your greatest priority.  You might even be eager to embrace wars and tolerate poverty and disease in others to bring about conditions that could prod the Rapture into being.  In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring.”