Disgruntled workers are walking off the job at Google in protest of the tech giant's role in a military project with the US Defense Department.
The Silicon Valley behemoth — which is under fire in Australia over how it tracks customers and uses their data — is facing dissent from some inside the company over lending its artificial intelligence capability to the US drone program.
An internal petition calling for Google to stay out of "the business of war" was reportedly gaining support in the US, with some workers quitting to protest the collaboration, known as Project Maven.
The Pentagon is using Google's leading artificial intelligence technology to allow its drones to process and instantly recognise images.
The US Defense Department is looking to leverage Google's machine learning and engineering capability to distinguish people and objects in drone videos and flag certain objects for human review.
According to certain industry experts, "in some cases this would lead to subsequent missile strikes on those people or objects".
About 4000 Google employees were said to have signed a petition that began circulating about three months ago.
"We believe that Google should not be in the business of war," the petition reads, according to copies posted online.
"Therefore, we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
Tech publication Gizmodo reported this week that about a dozen Google employees are quitting in an ethical stand on the issue.
The International Committee for Robot Arms Control (ICRAC) was among the groups that weighed in to support the employee rebellion.
While Google indicated that AI findings would be reviewed by human analysts and would not be used for offensive missions, the technology could pave the way for automated targeting systems on armed drones, ICRAC said in an open letter of support.
"As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems," ICRAC said in the letter.
"We are then just a short step away from authorising autonomous drones to kill automatically, without human supervision or meaningful human control."
Google is not the first institution to cop major backlash for being involved in the ethically dubious area of autonomous weapons.
In April, Toby Walsh, an Australian AI and robotics professor at UNSW, led a boycott of a top South Korean university over concerns about the development of killer robots.
The boycott involved more than 50 of the world's leading artificial intelligence and robotics researchers from 30 different countries. It came after the Korean university opened an AI weapons lab in collaboration with a major arms company that builds cluster munitions in contravention of UN bans.
For years Professor Walsh has been steadfast in his opposition to AI technology being applied to weapons systems, previously telling news.com.au "it would be a terrifying future if we allow ourselves to go down this road".
The Electronic Frontier Foundation (EFF) in the US was another group to welcome the internal Google debate, stressing the need for moral and ethical frameworks regarding the use of artificial intelligence in weaponry.
"The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety," wrote the EFF's Cindy Cohn and Peter Eckersley.
"Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behaviour from the military agencies that seek their expertise."