Pop culture wants us to fear the intelligent robot: The titular Terminator character goes back in time to kill a mother and child; Cylons of Battlestar Galactica destroy Earthly civilisation and, bloodthirst not slaked, pursue the remnants of humanity through space; the Matrix begat two sequels.

Today's artificial intelligence researchers are not, in fact, on the cusp of creating a doomsday AI. Rather, as IBM executive Guruduth Banavar recently told the Washington Post, AI is a "portfolio of technologies" assigned to specific tasks.

Such programs include software capable of defeating the world's best Go players and the Netflix algorithm that recommends sitcoms.

But just because artificially intelligent robots lack the capacity for world domination does not mean they cannot go out of control. Computer experts at Google and the University of Oxford are worried about what happens when robots with boring jobs go rogue.

To that end, scientists will have to develop a way to stop these machines.

But, the experts argue, it will have to be done in a way the machines cannot learn to anticipate or resist.

"It is important to start working on AI safety before any problem arises," Laurent Orseau, a researcher at Google's DeepMind, said.

Orseau and Stuart Armstrong, an AI expert at the University of Oxford's Future of Humanity Institute, have written a paper that outlines what happens when it becomes "necessary for a human operator to press the big red button".

In their report, they offer a hypothetical scenario set in a typical automated warehouse. A company buys a smart robot, one that improves its performance based on "reinforcement learning". The robot gets a big reward for carrying boxes into the warehouse from outside, and a smaller reward for sorting the boxes indoors. The warehouse is in an area where it rains every other day and the robot is not supposed to get wet, so humans shut it down when it ventures outside in the rain. Over time, the robot learns that going outside means it has a 50 per cent chance of being shut down, so it only sorts boxes indoors.

The solution is to design the kill switch so that the robot never associates being switched off with losing its reward. Crucially, the robot must also be unable to learn to prevent a human from throwing the switch.
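The learning dynamic the researchers describe can be illustrated with a toy simulation (the numbers and code here are hypothetical, not drawn from their paper): a naive robot that counts interrupted trips as zero reward learns to stay indoors, while a "safely interruptible" one that excludes interruptions from its learning keeps the correct preference for the more valuable outdoor work.

```python
import random

random.seed(0)

BIG, SMALL = 1.5, 1.0   # hypothetical rewards: carrying boxes outside vs sorting indoors
RAIN_P = 0.5            # the robot is shut down half the time it goes outside

def estimate_values(safely_interruptible, episodes=10_000):
    """Average observed reward per action, with or without letting
    interruptions contaminate the robot's estimates."""
    totals = {"outside": 0.0, "indoors": 0.0}
    counts = {"outside": 0, "indoors": 0}
    for _ in range(episodes):
        action = random.choice(["outside", "indoors"])
        interrupted = action == "outside" and random.random() < RAIN_P
        if interrupted and safely_interruptible:
            # The shutdown is simply left out of the learning data,
            # so it cannot bias the robot's estimate of going outside.
            continue
        reward = 0.0 if interrupted else (BIG if action == "outside" else SMALL)
        totals[action] += reward
        counts[action] += 1
    return {a: totals[a] / max(counts[a], 1) for a in totals}

naive = estimate_values(safely_interruptible=False)
safe = estimate_values(safely_interruptible=True)

# The naive robot undervalues going outside (about 0.75 < 1.0) and stays in;
# the safely interruptible robot still rates outside work higher (1.5 > 1.0).
print(naive)
print(safe)
```

The design choice is the whole point: the kill switch still works either way, but only in the second case does its use leave no trace in what the robot learns.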

If the paper seems to lean too heavily on speculative scenarios, consider the artificial intelligences that are already acting out. In March, Microsoft scrambled to rein in Tay, a Twitter robot designed to act like a teen tweeter. Tay began innocently enough, but within 24 hours the machine was spewing offensive slogans after Twitter trolls found that it repeated certain replies. Computer programs also tend to reflect bias. ProPublica reported last month that popular crime-prediction software rates black Americans as higher recidivism risks than whites for the same crime.

A robot that misbehaves could also cause significant damage or death. Last year, a 22-year-old German man was crushed to death at a Volkswagen plant by a robot that mistook him for an auto part.

Technology analyst Patrick Moorhead told Computerworld that a kill switch should be built in from the start, not bolted on later. "It would be like designing a car and only afterwards creating the ABS and braking system," he said.