How can we ensure that computers do what we want them to do when they are increasingly doing it for themselves?

That may sound like an abstract philosophical question, but it is also an urgent practical challenge, according to Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the world's leading thinkers on artificial intelligence.

It is all too easy to imagine scenarios in which increasingly powerful autonomous computer systems cause terrible real-world damage, either through thoughtless misuse or deliberate abuse, he says. Suppose, for example, in the not-too-distant future that a care robot
