WARNING: There be spoilers here!
Despite the bad reviews, I had to go see the film Transcendence last weekend. Given that its plot is based on certain theories pertaining to the technological singularity, how could I not go see it? Indeed, it was very much not a good movie, and although film criticism is outside the editorial scope of this blog, the story opportunity I think was missed is relevant for discussion here.
Johnny Depp plays Dr. Will Caster, a top computer scientist working in the field of AI (artificial intelligence) along with his wife, Rebecca, also a top computer scientist. The adoring couple believe absolutely that human “transcendence” through symbiosis with computers into a newly evolved condition is a virtuous pursuit that can only benefit mankind. Unfortunately for them, a group of hacker/terrorists led by one of Caster’s former students believes that advancing AI toward the technological singularity — the moment computing intelligence surpasses human intelligence and becomes self-aware — is a dangerous abomination. At the start of the film, this underground group assassinates several leading AI researchers, and one of their operatives shoots Dr. Caster, a wound that at first appears to be a non-lethal graze but is soon revealed to have caused radiation poisoning from a polonium-tipped bullet. With her husband having only weeks to live, Rebecca, with the help of their colleague and dear friend Max, uploads Caster’s mind into the core of their highly sophisticated computer and succeeds in giving his consciousness new life. Once Rebecca connects Caster to the Internet, he becomes omnipresent and nearly omniscient. And then the movie really starts to blow.
What unfortunately transpires after Caster’s transcendence is a stock action thriller complete with paramilitary personnel towing around a piece of WWII-era artillery for no particularly good reason. By the time supercomputer Caster begins to “heal” sick and wounded people with nano-tech that turns them superhuman and immortal as long as they’re connected to the network, Rebecca finally catches on to the fact that she’s started something pretty dangerous. Together with Max, the underground hackers, a smattering of federal agents, and the wise old scientist (played by Morgan Freeman, of course), they determine that the only way to stop the conscious computer is to send in a virus. This is ultimately accomplished when Rebecca volunteers to be infected with the virus and lets Caster upload her into the system. Stopping Caster has the unfortunate side effect of plunging the planet into darkness because, of course, the virus infects everything that is networked worldwide.
As my son and I left the theater, we joked about the fact that the film leaves us with the world “saved,” if we can call civilization reduced to a primitive state and about to erupt in medieval chaos “saved.” But that joke is exactly where I think the more interesting plot point was lost in the movie that got made. The existential question asks which is the better choice: to shut down all systems and let humanity try to rebuild civilization from the destruction that would surely follow, or to allow all living things to artificially evolve into a new state as networked entities with what might be described as a kind of holographic consciousness and probably no free will? Would it even be humanity?
This is already a question for our times, if one is to take seriously the very real utopianism of AI scientists like Ray Kurzweil, presently the director of engineering at Google. Plenty has been written about Kurzweil himself, his obsession with immortality underscoring a relentless pursuit in a lab that enables him to work at “Google scale,” as the offer was apparently put to him when the company courted his employment. AI research is no science fiction, and neither is the probability of singularity, but as theoretical physicist Stephen Hawking warns in an article published yesterday, nobody is really taking the implications of this inexorable march toward possible self-destruction very seriously. Never at a loss for wit even when dealing with weighty subjects, Hawking writes, “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI.”
Hawking warns unequivocally that, while AI could bring about some miraculous achievements in the short term, computers able to reprogram themselves, outwit financial markets, and even build weapons could very easily transcend human control and become the recipe for our sudden extinction. Personally, I think there are enough hazards to be considered right now, including experiments with autonomous weapons that can decide who their targets are, and consolidated, corporate control of the research, data, and the agenda itself. It seems to me people are just beginning to grapple with the implications of how much invasive data mining we’re allowing a company like Google to do, so how long will it take before anyone talks about the possible doomsday algorithms being tinkered with in its labs? Cynically, I believe I know the answer to that question, and it will have something to do with whatever The Biebster is up to next week.
Anyone who reads this blog knows I write in defense of copyrights but not necessarily why. It’s easy to get into debates and squabbles over the particulars of that body of law and to get caught up in what I believe to be a false debate over progress vs anti-progress. I defend copyrights for the same reason I’m uncomfortable with drone warfare and don’t want to see autonomous weapons, even if they might make my own kid’s future job in the Navy less hazardous. Copyrights, I believe, are merely one way in which we affirm that humans maintain dominion over their technology. When we reduce our intimate thoughts, ideas, and creative expressions to the banality of data, we take a step closer toward abdicating that authority.
We should probably pay attention to anyone of Stephen Hawking’s stature, but I find his voice on this particular subject uniquely poignant. After all, Hawking is probably about as close as any human has ever come to a life manifest as Descartes’s cogito ergo sum (“I think, therefore I am”), existing almost entirely as a mind without a body, and most importantly, a mind blessed with the capacity to travel well beyond the boundaries that contain most of us mortal thinkers. We are lucky to have had Hawking live as long as he has with a disease that was supposed to take his life many years ago. I’ll stop short of calling him a prophet, but maybe somebody should at least report what he’s saying on the news or something. Perhaps they could split the airtime for round-table discussion between the fate of Donald Sterling and the fate of all humanity. In the meantime, Transcendence was indeed a box-office flop for Alcon Pictures, and from my point of view, it’s because the filmmakers let the interesting story go for the sake of a lot of boilerplate action sequences. Maybe that in itself is a lesson.
One of Philip K. Dick’s most prescient stories is called “Autofac,” written nearly 60 years ago. In the story, AI takes over the world not with weapons but with factories strip-mining all the earth’s resources to produce goods the humans neither want nor need. I’m wondering what Jeff Bezos thinks of that story.
I saw the movie too and you didn’t seem to reveal the major plot twist, even though the whole movie revolves around it. 🙂
So my opinion is I don’t think superintelligence is necessarily equivalent to consciousness. We already have machines that show superintelligence in specialized tasks. ENIAC was better than any human ever was at arithmetic, and the gap has only gotten wider since. You can probably put a computer on your desk that surpasses the “combined intelligence of every person ever born” (-Johnny Depp) when it comes to doing arithmetic, a task that was considered, at least before computers, an intellectual one. It’s the most “degenerate” case of superintelligence, because it’s the thing that computers are arguably best at.
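To make that concrete, here’s a throwaway Python illustration (my own example, nothing to do with the movie): arithmetic on a scale no human could attempt in a lifetime, done on an ordinary machine in well under a second.

```python
import math
import time

# A trivial illustration of "degenerate" superintelligence: compute
# 100000!, a number with roughly 456,000 digits, on commodity hardware.
start = time.perf_counter()
n = math.factorial(100_000)
elapsed = time.perf_counter() - start

# Estimate the digit count from the bit length, to avoid converting the
# huge integer to a string (which newer Python versions limit by default).
digits = int(n.bit_length() * math.log10(2)) + 1
print(f"100000! has about {digits} digits, computed in {elapsed:.3f}s")
```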
Many other examples: being able to recall facts out of billions of data points perfectly is beyond the capability of the human mind, and computers can do it effortlessly. There even exist algorithms for data retrieval using O(1) data structures (meaning no degradation in lookup performance as the number of data points increases), and highly practical algorithms are O(log n) at worst. Chess, checkers: again, intellectual games in which computers surpass human ability.
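As a rough sketch of what those complexity classes look like in practice (my example, using only Python built-ins): a hash table gives average O(1) lookup, and binary search over a sorted list gives O(log n).

```python
import bisect

# O(1) average-case retrieval: a hash table (Python dict). Lookup cost
# stays essentially flat no matter how many entries are stored.
facts = {f"fact-{i}": i for i in range(1_000_000)}
print(facts["fact-999999"])  # one hash plus one probe, on average

# O(log n) retrieval: binary search over a sorted list. Each step halves
# the search space, so a million entries take about 20 comparisons.
keys = sorted(facts)  # sorted list of the same keys
i = bisect.bisect_left(keys, "fact-999999")
print(keys[i] == "fact-999999")  # True
```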
So what are humans really good at? The thing computers are not nearly as good at as humans, right now, is pattern recognition. From what we understand of the human brain, it seems to be mostly a machine wired for pattern recognition. And it’s really, really good at it. Recognizing a face, for instance? Seems trivial, effortless. But “behind the scenes,” the amount of computation needed to do that is quite high.
The algorithms you can run on a computer that do well in these kinds of tasks (pattern recognition in high-dimensional space) are deep neural networks. The human mind, and indeed most animal minds, is a kind of biological deep neural network. Perhaps by taking at least a little bit of inspiration from them, you can engineer some of their power.
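For the curious, here is a minimal sketch of my own of what a toy deep neural network looks like in code (NumPy, backpropagation on the XOR problem). It is nothing like a production-scale pattern recognizer, but the basic machinery is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers -- "deep" in the loosest possible sense.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 8)); b2 = np.zeros(8)
W3 = rng.normal(0, 1, (8, 1)); b3 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass through all three layers.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: hand-derived gradients of squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    # Gradient-descent weight updates.
    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(0)

print(out.round(3))  # typically approaches [0, 1, 1, 0]
```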
We didn’t know how to train deep neural networks until recently; methods to train them effectively only started coming out in the last few years. The first truly effective method for training a deep neural network (deep learning), and even then only for a special type of neural network called a deep belief network, which has a restricted Boltzmann machine at its top layer, came out 8 years ago!
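Here is a back-of-the-envelope sketch of the building block behind those deep belief networks: a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1), the update Hinton’s 2006 recipe applied greedily, layer by layer. My toy example with toy data, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3

W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible-unit biases
b_h = np.zeros(n_hidden)   # hidden-unit biases

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy binary data: two repeated patterns for the RBM to model.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

lr = 0.1
for epoch in range(1000):
    v0 = data
    # Positive phase: hidden probabilities given the data, then a sample.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back down to visible and up again.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: difference between data and reconstruction correlations.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(0)
    b_h += lr * (p_h0 - p_h1).mean(0)

# The two patterns should map to distinct hidden codes after training.
print(sigmoid(data[:2] @ W + b_h).round(2))
```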
The computer science research community has been discovering new methods ever since, and Google is at the forefront of this in the commercial sector. So what I am saying is: the topic of pattern recognition and learning in computer science has been revolutionized in only the past few years. It seems that computers will get superintelligent at many more tasks they have been struggling with for decades, and I don’t see this stopping any time soon.
But still, even after decades and decades of research, there is no hint yet of what would truly be needed for artificial general intelligence (AGI) or consciousness. That is still an utter mystery.
I’m not saying we will never crack AGI. It might be one of those things where just a tiny improvement to an existing technique produces some kind of ridiculous breakthrough. But there is no real roadmap or direction to get there. It really is a mystery.
And one last thing (probably?). None of these big tech companies with AI labs is investing tons of money in the AGI problem; it’s all about applying AI to practical problems in their businesses. It’s possible that there are people at Google especially working on AGI- and technological-singularity-related research to please Kurzweil. But Google (and really any tech company) is mostly interested in applying AI to improve its services and generally to make more money. And that doesn’t require developing AI with sentience…
PRI (Public Radio International) did a story on this… well… story!
It was done by their “Sci Fri” (Science Friday) program, and can be found via their website/podcast:
http://www.sciencefriday.com/audio/scifriaudio.xml