Self-driving cars are touted as our chariots of the future - but before we hop in one, Kiwi researchers say we need to be precisely clear about what the risks are.
And it's in communicating those potential risks to people where their new study has found a problem.
"There's a certain amount of risk involved when you step into an autonomous vehicle," University of Canterbury robotics expert Associate Professor Christoph Bartneck explained.
"These machines are not perfect, they will fail and they will hurt people. People have already died in incidents involving autonomous vehicles as a consequence of how they were programmed."
One of the most widely-publicised deaths was that of US woman Elaine Herzberg, who was struck by an Uber car in self-driving mode as she was crossing an Arizona street.
While New Zealand has been promoted internationally as a "test-bed" for the technologies, the Ministry of Transport and the NZ Transport Agency are still reviewing transport legislation to clarify the legality of testing driverless cars here.
And although there are no cars on our roads operating at autonomous levels 4 and 5, Christchurch Airport has a driverless shuttle designed by Ohmio Automation for public use in restricted areas.
Bartneck, of the university's Human Interface Technology Lab, or HIT Lab NZ, said manufacturers had a responsibility to clearly outline what the risks and uncertainties were when it came to using the technology.
"Communicating risk and uncertainty is one of the most challenging science communication tasks because it's based on advanced mathematical concepts, which people often struggle to understand."
Other scientific fields, such as medicine, faced the same challenges, added Bartneck's colleague, Professor Elena Moltchanova.
"Doctors have to explain to patients that particular treatments have a certain probability of succeeding or failing, and that those treatments come with a certain risk of side-effects," she said.
"A lot of research has been done in the medical space around communicating risk, so they're much further along than we are in human-robot interaction."
In a just-published study, Moltchanova and Bartneck investigated using different phrases and words to communicate risk and uncertainty surrounding driverless cars.
Participants were presented with a random series of sentences, each including a word or phrase such as "probably", "likely" or "almost no chance" to describe how likely a situation was to happen.
Next, they were asked to choose the percentage probability they believed matched that word or phrase.
"At each end of the scale there was a strong correlation: participants matched 'highly likely' with 100 per cent and 'almost no chance' with zero per cent," Moltchanova said.
"In the middle, around 60 to 80 per cent, things were less clear with words and phrases being used interchangeably."
Bartneck said that, ideally, both words and numbers should be used when giving uncertainties about the vehicles.
"We [also] need to tell people what the probability of an event happening is and how certain we are about that. These are two different things, but people struggle to separate them."
The researchers concluded that, until driverless cars could match the performance of humans, people using them should be aware of and agree to the risks.
"It will never be perfect, but we must do what we can to communicate uncertainty about technology such as autonomous vehicles as best as we can, because the consequences of not understanding are so dangerous."