The Story “Transcendence” Didn’t Tell

Photo by agsandrew.

WARNING:  There be spoilers here!

Despite the bad reviews, I had to go see the film Transcendence last weekend.  Given that its plot is based on certain theories pertaining to the technological singularity, how could I not go see it?  Indeed, it was very much not a good movie, and although film criticism is outside the editorial scope of this blog, the story opportunity I think was missed is relevant for discussion here.

Johnny Depp plays Dr. Will Caster, a top computer scientist working in the field of AI (artificial intelligence) along with his wife, Rebecca, also a top computer scientist.  The adoring couple believe absolutely that human “transcendence” through symbiosis with computers into a newly evolved condition is a virtuous pursuit that can only benefit mankind.  Unfortunately for them, a group of hacker/terrorists led by one of Caster’s former students believes that advancing AI toward the technological singularity — the moment computing intelligence surpasses human intelligence and becomes self-aware — is a dangerous abomination.  At the start of the film, this underground group assassinates several leading AI researchers, and one of their operatives shoots Dr. Caster, which at first appears to have been a non-lethal grazing but is soon revealed to have caused radiation poisoning from a polonium-tipped bullet.  With her husband having only weeks to live, Rebecca, with the help of their colleague and dear friend Max, uploads Caster’s mind into the core of their highly sophisticated computer and succeeds in giving his consciousness new life.  Once Rebecca connects Caster to the Internet, he becomes omnipresent and nearly omniscient.  And then the movie really starts to blow.

What unfortunately transpires after Caster’s transcendence is a stock action thriller complete with paramilitary personnel towing around a piece of WWII-era artillery for no particularly good reason.  By the time supercomputer Caster begins to “heal” sick and wounded people with nano-tech that turns them superhuman and immortal as long as they’re connected to the network, Rebecca finally catches on to the fact that she’s started something pretty dangerous.  Together with Max, the underground hackers, a smattering of federal agents, and the wise old scientist (played by Morgan Freeman of course), they determine that the only way to stop the conscious computer is to send in a virus.  This is ultimately accomplished when Rebecca volunteers to be infected with the virus and lets Caster upload her into the system.  Stopping Caster has the unfortunate side-effect of plunging the planet into darkness because, of course, the virus infects everything that is networked worldwide.

As my son and I left the theater, we joked about the fact that the film leaves us with the world “saved,” if we can call civilization reduced to a primitive state and about to erupt in medieval chaos “saved.”  But that joke is exactly where I think the more interesting plot point was lost in the movie that got made.  The existential question asks which is the better choice:  to shut down all systems and let humanity try to rebuild civilization from the destruction that would surely follow, or to allow all living things to artificially evolve into a new state as networked entities with what might be described as a kind of holographic consciousness and probably no free will?  Would it even be humanity?

This is already a question for our times, if one is to take seriously the very real utopianism of AI scientists like Ray Kurzweil, presently the director of engineering at Google.  Plenty has been written about Kurzweil himself, his obsession with immortality underscoring a relentless pursuit in a lab that enables him to work at “Google scale,” as the offer was apparently put to him when the company courted his employment.  AI research is no science fiction, and neither is the possibility of a singularity, but as theoretical physicist Stephen Hawking warns in an article published yesterday, nobody is really taking the implications of this inexorable march toward possible self-destruction very seriously.  Never at a loss for wit even when dealing with weighty subjects, Hawking writes, “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI.”

Hawking warns unequivocally that, while AI could bring about some miraculous achievements in the short term, computers able to reprogram themselves, outwit financial markets, and even build weapons could very easily transcend human control and become the recipe for our sudden extinction.  Personally, I think there are enough hazards to be considered right now, including experiments with autonomous weapons that can decide who their targets are, and consolidated, corporate control of the research, data, and the agenda itself.  It seems to me people are just beginning to grapple with the implications of how much invasive data mining we’re allowing a company like Google to do, so how long will it take before anyone talks about the possible doomsday algorithms being tinkered with in its labs?  Cynically, I believe I know the answer to that question, and it will have something to do with whatever The Biebster is up to next week.

Anyone who reads this blog knows I write in defense of copyrights but not necessarily why.  It’s easy to get into debates and squabbles over the particulars of that body of law and to get caught up in what I believe to be a false debate over progress vs anti-progress.  I defend copyrights for the same reason I’m uncomfortable with drone warfare and don’t want to see autonomous weapons, even if they might make my own kid’s future job in the Navy less hazardous.  Copyrights, I believe, are merely one way in which we affirm that humans maintain dominion over their technology.  When we reduce our intimate thoughts, ideas, and creative expressions to the banality of data, we take a step closer toward abdicating that authority.

We should probably pay attention to anyone of Stephen Hawking’s stature, but I find his voice on this particular subject uniquely poignant.  After all, Hawking is probably about as close as any human has ever come to a life manifest as Descartes’s cogito ergo sum (“I think, therefore I am”), existing almost entirely as a mind without a body, and most importantly, a mind blessed with the capacity to travel well beyond the boundaries that contain most of us mortal thinkers.  We are lucky to have had Hawking live as long as he has with a disease that was supposed to take his life many years ago.  I’ll stop short of calling him a prophet, but maybe somebody should at least report what he’s saying on the news or something.  Perhaps they could split the airtime for round-table discussion between the fate of Donald Sterling and the fate of all humanity.  In the meantime, Transcendence was indeed a box-office flop for Alcon Pictures, and from my point of view, it’s because the filmmakers let the interesting story go for the sake of a lot of boilerplate action sequences.  Maybe that in itself is a lesson.

© 2014, David Newhoff. All rights reserved.
