• Sean Welsh is a doctoral candidate in robot ethics at the University of Canterbury.

Learning how to drive is an ongoing process as we adapt to new situations, new road rules and new technology, and learn from when things go wrong.

But how does a driverless car learn how to drive, especially when something goes wrong?

That's the question being asked of Uber after last month's crash in Arizona. Two of its engineers were inside when one of its autonomous vehicles spun 180 degrees and flipped on to its side.

Smack, spin, flip

The Tempe police report on the investigation into the crash, obtained by the EE Times, details what happened.


The report says that the Uber Volvo was moving south at 61km/h in a 64km/h zone when it collided with a Honda turning into a side street.

Knocked off course, the Uber Volvo hit the traffic light at the corner and then spun and flipped, damaging two other vehicles before sliding to a stop on its side.

Thankfully, no one was hurt. The police determined that the Honda driver "failed to yield" (give way) and issued a ticket. The Uber car was not at fault.

Questions, questions

But Mike Demler, an analyst with the Linley Group technology consultancy, told the EE Times the Uber car could have done better. Demler said Uber needs to explain why its vehicle proceeded through the intersection at just under the speed limit when it could "see" that traffic had come to a stop in the middle and left lanes.

But as Uber uses "deep learning" to control its autonomous cars, it's not clear that Uber could answer Demler's query even if it wanted to. In deep learning, the actual code that would make the decision not to slow down would be a complex state in a neural network, not a line of code prescribing a simple rule like "if vision is obstructed at intersection, slow down".

Debugging deep learning

The case raises a deep technical issue. How do you reduce the risk of autonomous cars getting smashed when humans driving alongside them make bad judgments?

Demler's point is that the Uber car had not "learned" to slow down as a prudent precautionary measure at an intersection with obstructed lines of sight. Most human drivers would naturally be wary and slow down.

Deep reinforcement learning relies on "value functions" to evaluate the states that result from applying policies.

A value function assigns a number to a state - a strongly negative value can act like an "ouch" for a computer. Reinforcement learning takes its name from positive and negative reinforcement in psychology.

Until the Uber vehicle hits something and its value function registers the negative outcome, the control system might not quantify the risk appropriately. Having now hit something, it will, hopefully, have learned its lesson at the school of hard knocks.
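The mechanism can be sketched in a few lines. This is a toy illustration of a value-function update with negative reinforcement; the state names, reward and learning rate are invented for the example and bear no relation to Uber's actual system.

```python
# Toy value-function update with negative reinforcement.
# All state names and numbers are illustrative, not Uber's system.

# Value estimates for two driving "states" at an obstructed intersection.
values = {
    "proceed_at_speed": 0.0,  # initially, no reason to think this is risky
    "slow_down": 0.0,
}

ALPHA = 0.5  # learning rate


def update(state, reward):
    """Nudge the state's value estimate toward the observed reward."""
    values[state] += ALPHA * (reward - values[state])


# Before any crash, both options look equally good. After a collision,
# "proceed_at_speed" receives a strong negative reward - the "ouch":
update("proceed_at_speed", -10.0)

# A policy that picks the highest-valued state now prefers slowing down.
best = max(values, key=values.get)
print(best, values)
```

The point of the sketch is that the risky state only looks risky after the bad outcome has been experienced - which is exactly the problem with learning to drive at the school of hard knocks.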

Debugging formal logic

An alternative to deep learning is autonomous vehicles using explicitly stated rules expressed in formal logic. This is being developed by nuTonomy, which is running an autonomous taxi pilot in cooperation with authorities in Singapore.

NuTonomy's approach to controlling autonomous vehicles is based on a rules hierarchy. Top priority goes to rules such as "don't hit pedestrians", followed by "don't hit other vehicles" and "don't hit objects". Rules such as "give a comfortable ride" are the first to be broken when an emergency arises.
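A priority-ordered rule hierarchy of this kind can be sketched as follows. The rule names come from the description above, but the data structure and selection logic are an invented illustration, not nuTonomy's actual code.

```python
# Hedged sketch of a priority-ordered rule hierarchy, loosely based on
# the rules described above. Structure and numbers are illustrative.

RULES = {
    "don't hit pedestrians": 3,    # highest priority
    "don't hit other vehicles": 2,
    "don't hit objects": 1,
    "give a comfortable ride": 0,  # first to be broken in an emergency
}


def choose(candidate_plans):
    """Pick the plan whose worst rule violation has the lowest priority.

    Each plan is a (name, set_of_violated_rules) pair. A plan's cost is
    the highest priority among the rules it breaks (-1 if it breaks none).
    """
    def cost(plan):
        _, violated = plan
        return max((RULES[r] for r in violated), default=-1)

    return min(candidate_plans, key=cost)


# In an emergency, hard braking breaks only the comfort rule, so it beats
# a manoeuvre that would strike an object.
plans = [
    ("swerve into barrier", {"don't hit objects"}),
    ("brake hard", {"give a comfortable ride"}),
]
print(choose(plans)[0])
```

Because every rule and its priority is written out explicitly, an engineer can inspect exactly why one plan was chosen over another - the debuggability advantage discussed next.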

Key advantages of formal logic are provable correctness and relative ease of debugging. Debugging machine learning is trickier. On the other hand, with machine learning, you do not need to code complex hierarchies of rules.

Time will tell which is the better approach to driving lessons for driverless cars. For now, both systems still have much to learn.

- The Conversation