The robot revolution is coming.
And whether the concern is autonomous weapons used for military purposes or increasingly sophisticated automation taking human jobs, there is no shortage of voices highlighting the challenges we face in building robotic companions that move society forward.
But while the "killer robot" headlines generate the most angst and attention, there are much more subtle challenges when it comes to the broad category of artificial intelligence that will continue to spread through society, reports News.com.au.
Google is just one of many organisations trying to tackle what it sees as the hidden threat of AI: biased data used to build machine-learning algorithms.
"The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," Google's AI chief John Giannandrea said before a recent Google conference on the relationship between humans and AI systems.
There is no end to human folly, and if we unwittingly pass it on to our AI machines, they will further cement the subtle prejudices and biases of our communities.
Some experts warn that algorithmic bias is already pervasive in many industries.
As the MIT Technology Review points out, algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions.
In the US, algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.
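The mechanism behind these warnings is simple to demonstrate. The sketch below is purely illustrative: the "groups", approval counts and the naive decision rule are all invented, and no real lending system works this crudely. It shows only how a model that learns from historically skewed decisions will reproduce the skew.

```python
# Illustrative sketch only: a toy "model" trained on invented, historically
# biased loan decisions. Groups, counts and the rule are all hypothetical.

def train_approval_model(history):
    """Learn a per-group approval rate from past decisions.

    history: list of (group, approved) pairs.
    Returns a predict(group) function that approves whenever the
    historical approval rate for that group exceeds 50%.
    """
    totals, approvals = {}, {}
    for group, approved in history:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if approved else 0)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return lambda group: rates[group] > 0.5

# Invented history: group A was approved 4 times out of 5, group B only
# once out of 5, even though (we stipulate) applicants were equally qualified.
history = ([("A", True)] * 4 + [("A", False)] +
           [("B", True)] + [("B", False)] * 4)

predict = train_approval_model(history)
print(predict("A"))  # True  - the model reproduces the old pattern
print(predict("B"))  # False - and denies group B by default
```

Nothing in the code is malicious; the unfairness comes entirely from the training data, which is exactly the point Giannandrea is making.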
In April, researchers from the University of Bath in the UK demonstrated that algorithms pick up deeply ingrained race and gender prejudices concealed within our patterns of language use.
"A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it," said Joanna Bryson, a computer scientist and the co-author of the study.
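The kind of measurement behind such findings can be sketched with cosine similarity between word vectors. The tiny two-dimensional vectors below are hand-made for illustration, not real embeddings; in the actual research, vectors learned from large text corpora showed analogous associations.

```python
# Illustrative sketch of the idea behind word-association measurements in
# embeddings. The words and 2-d vectors are invented for demonstration;
# real studies use high-dimensional vectors trained on large corpora.
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings": imagine these were learned from a biased corpus.
vectors = {
    "career": (0.9, 0.1),
    "family": (0.1, 0.9),
    "he":     (0.8, 0.2),
    "she":    (0.2, 0.8),
}

# A positive difference means "he" sits closer to "career" than "she" does,
# i.e. the vectors have absorbed an association from the training text.
bias = (cosine(vectors["he"], vectors["career"])
        - cosine(vectors["she"], vectors["career"]))
print(bias > 0)  # True for these hand-made vectors
```

Because the vectors are fitted to human-written text, any association they exhibit was first present in the text, which is exactly Bryson's point: the AI is learning our prejudice, not inventing its own.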
This week DeepMind, the Google-owned company focused on artificial intelligence, announced the launch of an "ethics and society" unit to study the impact of new technologies on society.
You might remember the company for building the computer program that recently beat the world's best player of Go, an ancient Asian strategy board game considered to demand deep intuition.
The announcement by the London-based group, acquired by Google's parent Alphabet, is the latest effort in the tech sector to ease concerns that robotics and artificial intelligence will veer out of human control, or simply reflect the worst parts of us.
"As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work," said a blog post by DeepMind's Verity Harding and Sean Legassick announcing the launch.
"At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face."
The post said the focus would be on ensuring "truly beneficial and responsible" uses for artificial intelligence.
"If AI technologies are to serve society, they must be shaped by society's priorities and concerns," they wrote.
Google and DeepMind are members of the industry-founded Partnership on AI to Benefit People and Society, which includes Facebook, Amazon, Microsoft and other tech firms.