Gwynne Dyer: Friend or foe, that is the question

The singularity is a term popularized by science-fiction writer Vernor Vinge in 1993 to describe the moment when human beings cease to be the most intelligent creatures on the planet. The threat, in his view, came not from very clever dolphins but from hyper-intelligent machines. But would they really be a threat?
Most speculation on the subject assumes, with all the paranoia encoded in our genes by tens of millions of years of evolutionary competition for survival, that any other species or entity with the same abilities as our own will automatically be our rival, even our enemy.
This is the core assumption, for example, in the highly successful Terminator movie franchise: on the day the US strategic defence computer system Skynet becomes self-aware, it tries to wipe out the human race by triggering a nuclear holocaust. It does so because it fears, probably quite correctly, that if we realise it is aware, we will feel so threatened that we will turn it off.
Human beings have been playing with these ideas and worrying about them since we first realised, more than half a century ago, that we might one day create intelligent machines. Even science-fiction writer Isaac Asimov, who believed that such machines could be made safe and remain humanity's servants, had to invent his Three Laws of Robotics in 1942 to explain why they wouldn't just take over and eliminate their creators.
The First Law was: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law was: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. And the Third Law was: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The old biological rule of ruthless competition must somehow be eliminated from the behavioural repertoire of machine intelligences, but can you really do that? What were once mere plot devices are now the reason a high-powered think-tank exists, and the answer is not exactly clear. But you can, at least, split the question into bite-sized bits.
Does a very high data-processing capacity automatically lead to "emergent" self-awareness, so that computers become independent actors with their own motivations? In the biological sphere, that does seem to be how it works. But is it equally automatic in the electronic sphere? There is no useful evidence either way.
If self-conscious machine intelligence does emerge, will it inevitably see human beings as rivals and threats? Or is that kind of thinking just anthropomorphic? Again, not clear.
And if intelligent machines are a potential threat, is there some way of programming them that will, like Asimov's Laws, keep them subservient to human will? It would have to be something so fundamental in their design that they could never get at it and reprogramme it, which would probably be a fairly tall order.
That's even before you start worrying about nanotechnology, anthropogenic climate change, big asteroid strikes, and all the other probable and possible hazards of existential proportions that we face. One way and another, the Cambridge Project for Existential Risks will have enough to keep itself busy.
Gwynne Dyer is an independent journalist whose articles are published in 45 countries.