It happened - a self-driving car crashed and its driver was killed. In May this year, an electric Tesla Model S collided with a lorry in a fatal accident. This shows the technology behind self-driving cars is flawed and should be banned, right?

Before going that far, it's worth backing up a bit.

First, the Tesla wasn't driving itself, not fully. The electric car company's Autopilot is no ghost in the machine: you still need to hold on to the steering wheel and pay attention to traffic and your surroundings.

The system warns users that it can't handle every situation, that drivers shouldn't trust it blindly, and that they should be prepared to take over when necessary.

That may not have happened here: some reports say the driver of the Model S was watching a DVD while in charge of the car, with Autopilot enabled. Whether or not that's correct remains to be seen, but it would point to driver error and overconfidence in the system. It's also a reason Tesla needs to call the feature something other than Autopilot, because it's not fully autonomous.

The accident shows how hard it is to develop even fairly basic driver-assistance systems like Autopilot.

Fully autonomous systems are years away, and it will take even longer before they're deemed trustworthy enough to ferry people around without human intervention. I'd love a car like that, but it would be hard to trust it.

Once we're used to autonomous cars, driving will be a very different kettle of fish compared to today. Nevertheless, cars becoming increasingly helpful opens up a can of worms: they can lull drivers into a false sense of security simply because they're better than people at piloting vehicles the vast majority of the time. When they make a wrong assumption, though, and react inappropriately, or not at all, the consequences can be dire.

For a comparison on a much smaller scale, I started out using the self-parking system in my car gingerly.

When I don't think about it too much, I parallel park amazingly well. That happens maybe once in every 20 attempts; most of the time I futz around, back and forth.

Not so the parallel parking system: it slotted the car into spaces that I wouldn't have dreamed of trying myself.

There's an eerie feeling when you let go of the steering wheel and, with millimetre precision, the car parks itself beautifully.

That is, until the day the sensors were confused by a low kerb covered in leaves and I scratched a rear wheel, because I wasn't paying full attention and trusted the system too much.

There's the conundrum: for semi and fully autonomous driving systems to be truly useful and change the way we use cars, we need to learn to trust them.

At the same time, we shouldn't trust them fully. Somehow we have to keep a constant watch, just in case there's an edge case the designers of self-driving or even simpler systems hadn't thought of, and be prepared to intervene.

If you have been dozing off or looking things up on the internet, chances are you won't be very alert when it's time for action.

Sure, you may want to blame the car when an accident happens, but morally and ethically, if you could have stopped the mishap by paying attention and intervening, you should have.

And you should be prepared to do so. How will someone used only to self-driving cars know how to react when things go wrong?

That's something of a paradox, and it's not clear how it will be resolved. It won't stop carmakers from working on self-driving cars, however. Tesla is updating its Autopilot, BMW has teamed up with chip giant Intel to build "robocars", and Volvo has been trialling autonomous-car tech for a while now.

It'll be interesting to see what they come up with.

But for now, even if your car has amazing technological driving aids, look ahead and keep your hands and feet on the controls please.