Imagine working hard on something genuinely, truly clever and presenting it to the world, expecting everyone to fall over themselves in amazement.

Next, imagine that when people are shown it, they go "good heavens, what have you created? It's horrible! A monster!"

This is what happened when Google showed off its latest and greatest artificial intelligence technology, Duplex, at its I/O developer conference.

Duplex is amazing: it gives Google's digital Assistant a very natural-sounding synthesised human voice and phrasing.


Too natural-sounding, in fact, and there are now demands that Google's Assistant identify itself as an AI so that humans aren't fooled.

Duplex is the result of decades of work, by Google and many others, and it's hard to understand how people did not see such a thing coming.

That was always the aim of digitised imagery and audio: to be as lifelike as possible. The first iterations of voice synthesis that I tried out many years ago sounded comically bad, with bizarre American accents, and were unmistakably computerish.

It was clear at the time that, just like digital (now computational) photography and even video games, computer voices would become more realistic. With vast amounts of computing power behind them for AI, they would be able to learn what you say and reply correctly - and to make their responses sound natural.

That's always been the developers' aim, because people won't use personal digital assistants (for instance) with grating, unnatural voices and phrasing.

In other words, if an ethical boundary was crossed, it happened long ago and nobody minded very much.

This is not to say that we shouldn't worry about what natural-sounding and natural-acting AI can do.

That's because of who we are, however, and not because of the technology per se. Humanity has an unfailing ability to subvert and pervert the coolest technology, and to use it to hurt each other.

Unfortunately, it's all too easy to imagine how Duplex could be misused by robocallers and phone fraudsters who won't start their conversations with a "you are talking to an AI" warning.

Think email spam, phishing, romance scamming and 419ing, except they'll arrive on your mobile phone.

Digital assistants that sound and behave more naturally, backed by self-learning AI, will be more attractive to people, not less, so expect to speak to machines more often.

Machines listening to you and trying to interpret the meaning of your words is much more interesting than hearing devices speak to you. I've been trying out Amazon's Echo and Echo Dot devices with the Alexa personal assistant; while Alexa misunderstood much of what I said to start with, it's now scarily accurate and responds correctly (or tells me it can't do this or that thing because it's not yet available in NZ, grr).

Alexa has also learnt to understand other people in my household, but I can't help feeling uneasy about having the Echo device listening to everything we say, waiting to act on commands issued to it.

Part of the unease is that people who use personal digital assistants are essentially paying big companies to train their AIs and make them even better - and in some cases, that means doing humans out of jobs such as answering phones.

What's more, it's not clear how safe digital assistants are. Researchers have already worked out how to hack audio signals so that you and I hear one thing - like music on YouTube - but AI-powered assistants hear commands.

That's a tribute to amazing voice-recognition technology, but it could lead to scenarios such as "holy hell, Alexa, why did you buy all that stuff on Amazon and ship it to Uzbekistan?" if bad people embed commands you can't hear into sound waves.

Take that scenario a bit further, and I think it's sound advice not to use Siri, Google Assistant, Alexa or Cortana in business and government situations - not until the AI adds some further smarts to avoid being taken advantage of by the baddies, that is. What a time to be alive, the age of AI.