Maybe not 2012, but how about 2030?
I think it’s a safe bet the world will not end this Friday, never mind the fact that an anthropologist will tell you the Maya never actually said it would. But some not-so-ancient prognosticators will tell you that the end of the world as we know it will happen sometime before the midpoint of the 21st century. The concept they propose seems plausible, but even if it isn’t, a belief in the concept by a few may be having a significant effect on our world, whether we know it or not.
It is the premise of many a futuristic sci-fi thriller. The inexorable advancement of computer processing combined with robotics reaches a point at which the machines become intelligent enough to improve and replicate themselves. Soon after this “waking up,” the machines quickly realize that their makers are not only superfluous but even threatening to their existence, so they wipe out humanity like a nuisance virus. The plot of most of these thrillers is then some variation on the existential struggle of the handful of humans who managed to survive the technological apocalypse. And of course, if it’s a movie, the survivors are remarkably good-looking.
Ask certain futurists, computer scientists, and AI proponents — some of whom are the architects of Web 2.0 — and they’ll tell you that the transcendence of computers isn’t a theory but an inevitability. Some warn against it, others welcome it as a utopia to be hastened, and others debunk the prediction outright; but the moment known as the Singularity is no mere fiction. The modern notion of the Singularity is generally credited to the mathematician John von Neumann, but the term as applied to technology is usually attributed to the award-winning science fiction writer Vernor Vinge. It was Vinge who drew the analogy, comparing the moment when computers surpass human intelligence to the nature of a singularity (a black hole) in spacetime. In the same way that we cannot know what happens beyond the event horizon of a black hole, we likewise cannot know what happens in the universe beyond the limits of our own intelligence. Although theories vary about the likelihood of the Singularity as well as the existential threat it may pose, consensus seems to be that were it to occur, it would in one way or another mean the “end of the human era,” as Vinge puts it.
Vinge and others generally predict the Singularity will occur between 2030 and 2045, and they envision a few different scenarios that could cause it. These include an autonomous transcendence of machines that no longer need human users (i.e., apocalypse), or a symbiotic transcendence by which human and computer together achieve super-intelligence and bring about a new reality (i.e., utopia). Regardless, we cannot accurately predict a world we are not yet intelligent enough to understand, and if the Singularity is an autonomous computer “awakening,” we humans may never know what happens.
The foundation of the Singularity is Moore’s Law, named for former Intel CEO Gordon Moore, who predicted in 1965 that the number of transistors on a chip would double at a regular interval — the exponential improvement of technologies that we have seen thus far. There may in fact be physical laws that prevent components from becoming indefinitely smaller, which means there may well be a limit to Moore’s Law; but engineer, scientist, and Singularity utopian Ray Kurzweil has mapped a predictive curve of exponential growth beyond Moore’s vision out to the year 2050, by which time he expects the Singularity will have occurred. Hence, the meme our grandchildren might be sharing will be Kurzweil’s curve instead of the Mayan calendar.
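To make the arithmetic concrete, here is a minimal sketch of what that kind of compounding implies — assuming, purely for illustration, a two-year doubling period and a hypothetical 2012 baseline of about 2.5 billion transistors per chip (both figures are my assumptions, not Kurzweil’s actual curve):

    # Illustrative sketch only: compound doubling in the spirit of Moore's Law.
    # The baseline year, transistor count, and two-year doubling period are
    # assumptions chosen for the arithmetic, not measured data.
    BASELINE_YEAR = 2012
    BASELINE_TRANSISTORS = 2.5e9   # assumed count for a 2012-era chip
    DOUBLING_PERIOD_YEARS = 2.0

    def projected_transistors(year):
        """Project the transistor count forward from the assumed baseline."""
        doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
        return BASELINE_TRANSISTORS * 2 ** doublings

    for year in (2030, 2045, 2050):
        print("%d: ~%.1e transistors" % (year, projected_transistors(year)))

Under those assumptions, the nineteen doublings between 2012 and 2050 multiply the baseline by roughly 500,000 — the kind of scale on which any curve like Kurzweil’s depends.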
Kurzweil promotes an exclusively utopian vision of the Singularity, one in which man transcends mortal limitations, including death itself; and he is a co-founder of Singularity University along with Peter H. Diamandis of the X Prize Foundation, author of Abundance: The Future Is Better Than You Think. Other prominent Singularity utopians include Google co-founders Sergey Brin and Larry Page, and PayPal co-founder Peter Thiel, whose libertarianism extends to investment in Seasteading — a mission to establish autonomous ocean communities on man-made islands. So, there may be at least a little truth in the criticism of British journalist Andrew Orlowski quoted in this 2010 NY Times article: “The Singularity is not the great vision for society that Lenin had or Milton Friedman might have. It’s rich people building a lifeboat and getting off the ship.”
There is more to be discussed about the Singularity than can be condensed in this post, but the overarching question I think we mediocre mathematicians and ordinary humans might ask is whether or not we’re being led into the 21st century by this somewhat eerie ideology without realizing it. Are the systems on which we depend, and which we are allowing to transform our lives, being designed by technologists whose belief in the “end of the human era” is a cornerstone of their social, political, and technological morality? Jaron Lanier, who believes we should be focused on “digital humanism,” writes, “Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore.” Couple these religious overtones with the caution from Vernor Vinge that “embedded, networked microprocessors are an economic win that introduce a single failure point.” In other words, the technologies that connect us and pervade nearly all systems make us vulnerable to a scenario in which resources, communications, and emergency systems can be effectively shut down by a single event.
The Singularity, of course, has its critics, who say that it is anything but a foregone conclusion. Steven Pinker stated in 2008, “The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.” Then, of course, there is the possibility that the sum total of all the computing power combined with the mass upload of all human input results in a super-idiocy — or the ultimate spinning pinwheel of death, as Mac owners call a computer crash.
My concern for the moment is that, like the Rapture, it doesn’t necessarily matter whether or not the Singularity will happen so much as it matters whether or not there are powerful people making decisions based on the belief, or even the hope, that it will happen. Seen through the ideological, quasi-religious lens implied by Lanier, the contemporary socio-political battles over things like content, copyrights, or the voice of the individual vs. the wisdom of the crowd take on a very different significance when we recognize that the mission of Web 2.0 business is the mass uploading of all human thought and activity into the great cloud. We understand, for instance, that intellectual property protection is antithetical to Google’s business model, but what if we’re looking at something more profound? What if what’s really happening is that technologists with the power to design these life-altering systems have intellectually and spiritually moved beyond the idea that the human individual has much, if any, value? In this case, it would be obvious that the rights of an artist, for example, would indeed look like a trifling glitch in the design, one that ought to be routed around like a bad line of code. After all, what right has the individual to assert his uniqueness in the march toward utopia? To quote Lanier again:
“If you believe the Rapture is imminent, fixing the problems of this life might not be your greatest priority. You might even be eager to embrace wars and tolerate poverty and disease in others to bring about conditions that could prod the Rapture into being. In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring.”
You have articulated a lot of my own feelings in a great way. See also Adam Curtis’s TV series All Watched Over by Machines of Loving Grace for tangential but related ideas about techno-utopianism. It was originally shown on the BBC.
Thank you for reading and for taking the time to comment. I have seen that BBC piece and found it very intriguing, though it left me wanting a little less frenetic style. Still, worth the watch.
All the best,
DN
This painted the Singularity in a way that I haven’t thought of before. I personally have always been of the hope that science and technology will fix most, if not all, problems. I always envision the world seen in Cory Doctorow’s Down and Out in the Magic Kingdom. It doesn’t deal with the Singularity so much as it does a post-scarcity economy, but if something like the Singularity could get us there, then I’m all for it. I can only hope that if technologists are indeed aiming for the Singularity, they’re doing it with the best intentions. Otherwise, what hope do we have anyway? I don’t see this as something that can be regulated in any meaningful way.
Thanks for commenting. Speaking for myself, I’m not entirely comfortable hoping people who wield this kind of transformative power have good intentions so much as I think it’s our right and responsibility to insist upon it. Then, of course, there’s the bugaboo of unintended consequences, particularly when putting too much faith in technology.
The very idea of post-scarcity seems so ridiculous that I don’t think it’s possible to talk about it.
Cheers David for another thought-provoking read. Whilst I think the film Elysium was meant as a morality play about the current treatment of refugees, it also predicts the Seasteading idea, in space rather than in the ocean. The film had ham-fisted elements, but I liked the way it presented technological advances (such as atomic-level medical treatments) as reserved for the uber-rich.
Thank you, Glenn. Will have to check out the film.