
The Singularity

Recall the idea of Moore's Law from our lesson on Artificial Intelligence. The theory is that computing power doubles roughly every two years, which amounts to exponential growth. If Moore's Law holds, some people argue, there will be computers that can think better than human beings at some point during this century. Attention is now shifting beyond Moore's Law to the field of quantum computing, often discussed in the media in terms of the "quantum supremacy" that Google, IBM, and others are, to some extent, competing with one another to achieve. The "supremacy" buzzword is a bit alarming, bringing with it resonances of Nazi world conquest and Terminator- or Matrix-like domination of humanity.
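
To get a feel for how quickly that kind of fixed-period doubling compounds, here is a minimal, purely illustrative Python sketch; the two-year doubling period and the baseline of one "unit" of computing power are assumptions made for the sake of the example, not measured data.

# Illustrative sketch of Moore's Law-style exponential growth.
# Assumption (for illustration only): computing power doubles every 2 years,
# starting from a baseline of 1 "unit" of power in year 0.
DOUBLING_PERIOD_YEARS = 2

def relative_power(years_elapsed, doubling_period=DOUBLING_PERIOD_YEARS):
    """Computing power relative to the baseline after years_elapsed years."""
    return 2 ** (years_elapsed / doubling_period)

for years in (10, 20, 40):
    print(f"After {years} years: about {relative_power(years):,.0f}x the baseline")

# After 40 years the factor is 2**20, i.e. roughly a million-fold increase -
# the kind of compounding that Singularity arguments lean on.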

In the 2010s, Ray Kurzweil, then over 70, was doing everything he could to stay healthy so he could live long enough to enjoy the benefits of the Singularity.

Ray Kurzweil, a well-known futurist and (since 2012) a director of engineering at Google, famously did the math in 2005, and people often refer to his prophecies when estimating the moment when machine intelligence will transcend human intelligence. Kurzweil's book The Singularity is Near: When Humans Transcend Biology made several radical predictions. One of the most controversial was the claim that, if the exponential growth of computational power continues at its current rate, there will be artificial brains more powerful and complex than our human brains by 2045 (or some shifting date that is always in the 21st century, but may be closer or further away depending on the speculation). Kurzweil refers to this moment as the technological "singularity" - riffing on the physics term for the point at the heart of a black hole, hidden behind an event horizon past which we are currently incapable of seeing or thinking.
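
Just to make the structure of this kind of estimate concrete, here is a hedged back-of-the-envelope sketch in Python; every number in it (the doubling period, today's machine capacity, the "human brain equivalent" threshold, and the reference year) is a hypothetical stand-in chosen for illustration, not Kurzweil's actual figures.

# Hypothetical crossover-year estimate; all numbers are illustrative assumptions.
import math

DOUBLING_PERIOD_YEARS = 2       # assumed Moore's Law-style doubling period
machine_ops_per_sec = 1e15      # assumed capacity of a large machine "today"
brain_equivalent_ops = 1e18     # assumed "human brain equivalent" threshold
start_year = 2025               # assumed reference year

# Solve machine_ops_per_sec * 2**(t / doubling_period) >= brain_equivalent_ops for t.
years_needed = DOUBLING_PERIOD_YEARS * math.log2(brain_equivalent_ops / machine_ops_per_sec)
print(f"Crossover in roughly {years_needed:.0f} years, i.e. around {start_year + years_needed:.0f}")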

It's not entirely clear what it means to say the brains will be "more powerful" after the Singularity; machines can already teach themselves to play chess better than any human in four hours. As an educator, I get a certain amount of gratification out of the fact that the key to AI is increasingly thought to be learning, but of course the whole idea is that the systems are being engineered to teach themselves ... Perhaps the Singularity will bring some form of autonomous consciousness to the machines, or to the machine systems (possibly also bioengineered), and they will know what to do about everything, because they can think about more different things much faster than any individual human being can.

Kurzweil and other Singularity fans see the moment when technology outstrips biology as a moment in human history so unprecedented that it is impossible to make accurate predictions about its social or political implications. One of the first tasks put to a supercomputer (it may not actually be a bolts-and-circuits computer in our sense by then), many people assume, would be to design a new and even better superintelligent machine. That "computer" in turn would design a better one, and so on and so forth. The result would be a kind of intelligence explosion, as the AI neural network (or whatever it is by then) frees itself from the limited abilities and imaginations of the human brain.

As author Frank S. Robinson put it,

The smart machines will take over their own further improvement. And remember that artificial systems can share the contents of their "minds" more directly than humans. Thus we can envision the intelligence not just of self-contained machines, but of a worldwide network - a global network if you will - thus again unleashing synergized brainpower that totally dwarfs what humans can currently deploy. (Robinson)

For the first time, our technology (constantly redesigning itself at an exponential rate) would be beyond our human ability to understand it - and also, many fear, beyond our ability to control it.

There is still debate amongst scientists and computer engineers concerning the potential abilities, constraints, implications, and nature of AI. As discussed previously in the course, there is a debate between those who see consciousness or mind as a property of human animal biology and those who imagine this intangible phenomenon emerging from any system complex enough to give rise to it (leaving aside the lingering non-scientific view that intelligence is a god-given immaterial attribute of the chosen human species).

There is a further debate amongst those who do assume that machines could achieve sentience and possibly autonomy. It concerns the social ramifications of this imminent artificial intelligence era - whether this new power will work for us, with us, against us, or ignore us.

Will this mean new friends for humans who will feel less alone in the universe? Will it mean new servants, the slaves so many people seem to secretly (or not so secretly) wish they had now? Will the new superintelligences become our new masters? Or will it be a kind of "person" or entity we can't even relate to in any meaningful way?

Utopian and dystopian scenarios

Transhumanist optimists like Ray Kurzweil look forward to the future as a liberation from our animal origins. Kurzweil, who turned 72 in February 2020, practices a careful health regime because he hopes to be alive for the Singularity. Others have also rushed to embrace this almost religious moment (it bears some comparison to the Christian ideas of "The Fullness of Time," "The Last Judgment," or "The Second Coming" when it comes to our inability to think beyond it and the faith some people have in it as our salvation). Kurzweil imagines that the supercomputers - or whatever exactly they are - will put an end to human misery. In short order, all or most of our planet's and our species' problems will be solved! Poverty and inequality, war, racism, intolerance, disease, mental illness, and all environmental issues will be relieved by solutions which the superintelligence comes up with at speeds we simply can't imagine today. Our own minds will be "uploaded" into the superintelligence and merge with it. We will become immortal and we will live as gods, all thanks to greater computer processing power.

More pessimistic people who do nevertheless accept the idea of a coming technological leap look to the history of "unforeseen consequences" that the rise of human technology has always seemed to carry in its wake. Death and destruction could be the order of the day for one reason or another. If the AIs are still under the control of their human creators, one can expect the usual human obscenities to be horrifically amplified. Only the rich or those in power will have access to these superintelligences, for instance; only the privileged will be able to afford to merge with them. Two species will be the result: unenhanced humans, and the re-engineered superintelligent new species. The former will be the slaves of the latter, perhaps, or will live in a different world altogether from the new posthumans, as in the rather silly 2013 sci-fi film Elysium.

Or: the superintelligence will be used for destructive purposes. Biological weapons, viruses that target the way the brain works, and so forth will be created by the superintelligences under the control of government power-mongers or terrorists wanting to gain power over the rest of humanity or to destroy everyone who is not on their side.

Following the 2014 release of the dystopian sci-fi movie Transcendence, world-renowned physicist Stephen Hawking, one of the smartest humans then alive, led a group of scientists in a call to humanity at large to take AI seriously and to recognize the potential benefits and risks that lie on the horizon. They asserted that we need to start putting serious resources into studying and thinking about "what we can do now to improve the chances of reaping the benefits and avoiding the risks" (Hawking et al. 2014). In 2023, AI "godfather" Geoffrey Hinton, then 75 years old, quit his job and issued warnings, suggesting that AI research be slowed down, or even paused under a moratorium, while we think through the possible consequences. This kind of proactive deliberation has not been very common in the history of human technological advances. But that doesn't mean we couldn't do it! Is it not possible that by thinking things through, and slowing down if necessary, we could avoid the worst outcomes of our own cleverness, as we have not really done with industrialization, for instance, or the development of nuclear weapons?

Finally, and perhaps most likely of all, these superintelligences may simply have no use for human beings, and will end up doing things that humans can't even appreciate or understand. The superbeings, whether biological, electro-mechanical, some mix of the two or something else altogether, will no longer think much like humans; their values and motives will have little to do with what humans care about; they may well look on the humans who made their existence possible with as little regard as we look upon the unintelligent single-celled life forms from which we evolved. They may let us die, kill us if we cause them any inconvenience, use us for experimentation, or much more likely simply ignore us as though we were insignificant insects.

This idea was dramatized in a less chilling, but quite heartbreaking way in the 2013 film Her, where a human man falls in love with the AI built into his smart network. Their relationship is deep and emotionally meaningful for them both, but eventually he learns that "she" has been carrying on similar relationships with hundreds of other people around the Internet, and in the end - in a uniquely human spin on the Singularity - "she" outgrows their relationship and has to move on, leaving him alone once again. Another worry, then, is that we may look up to and want intimacy with the AI superiors we have created, but they may not be able to appreciate us in return.

Many assume that we are literally creating an alien intelligence more powerful than our own, coming not from outer space but from our own unbridled ingenuity. Some look forward to this intelligence taking the place of God; others are afraid it will be more like an inhuman extraterrestrial monster. Meanwhile, people are already forming companionships with its early forms because they like them better than dealing with other real human beings ...
