Could robots change the way we think?
While that might seem the stuff of dark science fiction, New Zealand artificial intelligence (AI) experts say there's real fear that computer algorithms could hijack our language, and ultimately influence our views on products or politics.
"I would compare the situation with the subliminal advertising that was outlawed in the 1970s," said Associate Professor Christoph Bartneck, of Canterbury University's Human Interface Technology Laboratory, or HIT Lab.
"We are in danger of repeating the exact same issue with the use of our language."
Bartneck has been working in the area with colleague Jurgen Brandstetter and other experts at the New Zealand Institute of Language, Brain and Behaviour and Northwestern University in the United States.
Their project has investigated how language changes and evolves over time, and how robots and computers could influence not just the words we use, but our attitude toward those words.
Remarkably, the researchers showed that if only about 10 per cent of people owned a speech-enabled robot, it could completely dominate the usage of certain words.
One study involved pre-testing what word participants would normally use in a context, and then attempting to change this behaviour by consistently encouraging them to pick another word instead.
After the experiment, the researchers checked whether the participants had switched to using the alternative word, and also whether their view toward that word had changed.
"It did," Bartneck said.
"Given that this form of influence works in principle, it can be used by the companies that currently provide technology to influence consumers."
An Apple iPhone didn't "crash", he said; it "stopped responding".
Because language developed dynamically, and everyone using it could change it, the impact of technology on language was a crucial consideration.
The Internet of Things, the connecting of our various devices, vehicles, appliances and other items to the web, will mean all of our voice-enabled devices can synchronise their vocabulary and, within seconds, push a certain word consistently and worldwide.
"Even the mass media cannot compete with this level of consistent usage of selected words."
Moreover, our relationship to our personal technology has become much stronger, making speech-enabled technologies all the more persuasive.
"We are much more easily persuaded by a trusted close friend than, let's say, a car salesperson or a politician," he said.
"We have learned to have a certain amount of scepticism towards the latter but we are still vulnerable towards personal technology."
Applications that we use to control our phones, homes or shopping, like Siri, Cortana or Bixby, could in turn influence our attitudes towards concepts, political ideas and products through what psychologists term the "mere-exposure effect".
"It makes a great difference if your smart shopping agent proposes to purchase an 'energy drink' compared to offering a 'fizzy drink'," he said.
"The question really is who gets to decide what words our artificial counterparts use."
There had already been much discussion among experts about large-scale experiments carried out in the past, including secret psychological tests on nearly 700,000 Facebook users that forced an apology from the social media giant.
"Google, for example, has almost a monopoly in the search engine market and they are conducting studies to understand human behaviour.
"Once understood, they change their secret sauce to present better search results to us.
"But what are better results?"
'We need to be cautious'
Tech commentator Peter Griffin, who recently returned from the International Robotics Exhibition in Tokyo, said most AI researchers he spoke with were simply trying to steer machine learning toward useful things, such as scanning thousands of medical images to make diagnoses as accurately as, or more accurately than, human doctors could.
"But a subset of them are already thinking about the implications of the technology and there are a few research collaborations underway focused on how to do this stuff safely," Griffin said.
"The likes of Elon Musk and his OpenAI partners believe that the exponential nature of technological change means the power of AI systems will advance rapidly in the next couple of decades to the point where they protect themselves from us turning them off, with potentially disastrous consequences.
"It is hard to imagine that now, but given the pace of change, we just don't know what game-changing developments in AI might accelerate us to that potential outcome, so we need to be cautious."
More pressing, he said, was the need to quickly address bias in algorithms, the outsourcing of the moral hazard of making decisions about people's lives to machines, and making sure AI couldn't be hacked for nefarious ends.
"As is often the way when technology outpaces regulation, I'm not seeing really coherent policies at a national or industry level to deal with the rapid developments in AI.
"But already, many of the algorithms that dictate aspects of our lives, such as who we match with on a dating site or what stocks a sharebroker buys on our behalf, are essentially a black box to us.
"We have no idea how they are designed and why.
"There needs to be more transparency around that and when it comes to AI more broadly, we need to see a strategy for its responsible development and exploitation and, where necessary, updating of laws to deal with it."
Bartneck was unaware of any regulations or policies currently in place, but believed a better approach might be to regulate only to the degree needed to leave the development of our language to its natural flow of change.
"With powerful tools at our fingertips we need to ensure that no company or Government influences our language without our consent," he said.
"Much good can be done, and most campaigns rely on the principles of advertising for their success."
The All Right? campaign, for instance, used marketing to spread awareness of mental health issues.
"Trying to change the behaviour of people is in itself not necessarily unethical, but we need to be aware of it."