Google is about to release its first smart speaker into NZ: the $89 Nest Mini, which will be available from June 25.
It's appealing on many levels, but will also reignite a privacy debate that's far from resolved.
Like Amazon's Echo (which has been available here for a while) or Apple's HomePod (yet to reach our shores), it responds to voice instructions.
• Expert reveals why you should never have Alexa in your bedroom
• Police think Amazon's Alexa may have information on a fatal stabbing case
• Facebook, Google and social media apps track you online. Protect yourself with these steps
You can use Google Assistant (Google's version of Siri) to ask your Nest Mini to play a song, give you the weather forecast, or search the internet and read out the results; with compatible smart home devices, it can also control your lighting, security and heating.
The Nest Mini has a lot of smarts.
It will adjust the volume on a podcast if you turn your dishwasher on, for example.
If you have Nest Minis in different rooms, you can use them as an intercom just by saying, "Hey Google, call the kitchen."
Or you can get music to follow you around the house by saying: "Hey Google, move the music to living room speaker."
And, over time, it can learn to distinguish between different family members' voices.
It's that learning capability that will start to make some people nervous - plus the related topic of when your smart speaker is listening, and when it's not.
Google says the Nest Mini only listens after you say "Hey Google" or "OK Google" to wake Google Assistant.
The voice commands you issue after one of those trigger phrases are recorded to improve accuracy - but you can review and delete the recordings via the Google Home app.
Google Nest product manager Chris Chan also points out that the Google Nest Mini is one of the few smart speakers with a physical mute button (most others have to be muted via an app, which is more finicky).
Smart speakers made headlines last year when it was revealed that Amazon, Apple and Google don't just use AI to analyse voice recordings but also teams of human contractors.
All three said the aim was to improve accuracy, and that the people working on transcriptions were given anonymised audio.
A contractor to Apple told the Guardian in July 2019: "There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data."
Smart devices could be triggered by words that sounded like a wake-word. Sometimes even a sound, like a zip, could trigger a smart speaker to start recording.
Movements, such as lifting an Apple or Android watch or phone, can also trigger a voice assistant, and recording.
I used an Amazon Alexa for several months but got fed up with it being woken by dialogue on our TV - and it was always hard to figure out what had triggered it, since rewinding and replaying a programme failed to wake Alexa again.
All I did was jump when Alexa was woken by the telly. So I got off lightly compared with the Portland, Oregon family who had Amazon's smart speaker send recordings of them to a random person on their contact list, without their knowledge (Amazon called it a rare occurrence and said it was taking steps to stop it happening again).
Apple suspended the practice of letting contractors listen to Siri recordings last year, then added an opt-out feature to stop audio being shared with anyone at the company (on your Apple device, go to Settings > Privacy > Analytics and Improvements, then disable the "Improve Siri and Dictation" option).
If you use an Amazon smart speaker, disable recordings in the Alexa app: go to Settings > Alexa Privacy, then disable "Help Improve Amazon Services and Develop New Features".
For a Google device, disable recording via myaccount.google.com/activitycontrols.
I asked Google who could listen to recordings if the feature wasn't disabled. A rep replied with the following points:
• We only transcribe audio snippets that we detect were directed to Google, not background conversations, noises, etc.
• Transcribers are instructed to discard personal information, such as bank account data.
• Transcription is not handled by Google employees. It is conducted by vendors that have been vetted through security and privacy reviews.
And the Google rep added the general comment that "Audio transcription is a standard part of what makes any speech technology work - not specific to the Google Assistant - and helps improve accuracy, works for many languages and accents, etc. Traditionally, most machine-learning systems need human-labelled data; voice is no different.
"We're moving toward techniques that do not require human labelling [people transcribing recordings], which we're leading in research."
Privacy Commissioner John Edwards said his office has not fielded any complaints about smart speakers.
Edwards does recommend, however, that the owner of any device that listens - from a smart TV to a smart speaker to a smartphone - familiarise themselves with its settings, including how to review or disable any recordings.
The Privacy Commissioner has given tech companies props for upping their game, but also said they fall short, in most instances, of what he calls "privacy by design" - enabling maximum privacy by default, then asking customers if they want to opt in to more intrusive features.