Debate over how to test AI is intensifying after a series of high-profile accidents. An "autopilot" system developed by the Silicon Valley electric vehicle specialist Tesla is under investigation by US safety authorities after two fatal crashes, and a trial of a self-driving system by Uber in Arizona has been suspended after a Volvo car fitted with its technology hit and killed a pedestrian in March.
Hassabis suggested the nature of AI technology makes it unsuitable for such "safety critical" systems.
He said: "We roughly know what it is doing but not specifically 'this bit of code is doing this'. And for safety critical systems like healthcare or to control a plane you would want to know why a decision was made, so you could track back for accountability."
The 41-year-old has made several major breakthroughs in AI, most famously leading the development of AlphaGo, software that defeated the best human players of the Chinese board game Go.
Hassabis said that fears about "killer robots" were overblown but that current driverless car programmes may be putting people at risk.
The comments set him at odds with Elon Musk, Tesla's chief executive, who has branded critics of autopilot "irresponsible". Google's parent company, Alphabet, also has a driverless car programme, Waymo, which is currently testing self-driving cars in the US.
Hassabis said he hoped DeepMind would become an "ethical beacon" in AI. When Google swooped to acquire it, DeepMind faced criticism for succumbing to the lure of Silicon Valley cash. It promised to set up an ethics board and maintain its independence.