Participants were given "homework": submit entries for worst-case scenarios. The scenarios had to be realistic - based on current technologies or those that appear possible - and set five to 25 years in the future. The entrants with the "winning" nightmares were chosen to lead the panels, each of which featured about four experts split across two teams - one to mount the attack, the other to work out how to prevent it.
Turns out many of these researchers can match science-fiction writers Arthur C Clarke and Philip K Dick for dystopian visions. In many cases, little imagination was required - scenarios like technology being used to sway elections or new cyber attacks using AI are being seen in the real world, or are at least technically possible. Horvitz cited research that shows how to alter the way a self-driving car sees traffic signs so that the vehicle misreads a "stop" sign as "yield".

The possibility of intelligent, automated cyber attacks is the one that most worries John Launchbury, who directs one of the offices at the US Defence Advanced Research Projects Agency (Darpa), and Kathleen Fisher, chairwoman of the computer science department at Tufts University. The two led that session.
What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear programme that escaped into the wild, but stealthier and more autonomous.
"We're talking about malware on steroids that is AI-enabled," said Fisher, who is an expert in programming languages. Fisher presented her scenario under a slide bearing the words "What could possibly go wrong?"
How did the defending blue team fare on that one? Not well, said Launchbury. They argued that the advanced AI needed for such an attack would require a lot of computing power and communication, making it easier to detect. But the red team felt that it would be easy to hide behind innocuous activities, Fisher said. For example, attackers could get innocent users to play an addictive video game to cover up their work.
To prevent a stock-market manipulation scenario dreamed up by University of Michigan computer science professor Michael Wellman, blue team members suggested treating attackers like malware by trying to recognise them via a database of known types of hacks. Wellman, who has worked in AI for more than 30 years and calls himself an old-timer on the subject, said that approach could be useful in finance.
Beyond actual solutions, organisers hope the doomsday workshop started conversations about what needs to happen, raised awareness and combined ideas from different disciplines. The Origins Project plans to make materials from the closed-door sessions public, and may design further workshops around a specific scenario or two, Krauss said.
Darpa's Launchbury hopes the presence of policy figures among the participants will foster concrete steps, like agreements on rules of engagement for cyber war, automated weapons and robot troops.
Krauss, chairman of the board of sponsors of the Bulletin of the Atomic Scientists - the group behind the Doomsday Clock, a symbolic measure of how close we are to global catastrophe - said some of what he saw at the workshop "informed" his thinking on whether the clock ought to shift even closer to midnight. But don't go stocking up on canned food and moving into a bunker in the wilderness just yet.
"Some things we think of as cataclysmic may turn out to be just fine," he said.
- Bloomberg