"AI is no different from the climate," Pichai said. "You can't get safety by having one country or a set of countries working on it. You need a global framework."
Current frameworks to regulate the technology in the U.S. and Europe are a "great start," and countries will have to work together on international agreements, similar to the Paris climate accord, to ensure it's developed responsibly, Pichai said.
Technology such as facial recognition can be used for good, such as finding missing people, or have "negative consequences," such as mass surveillance, he said.
Keith Enright, Google's chief privacy officer, also spoke about the potential of artificial intelligence and machine learning to let the company keep developing new technologies and services while using a minimum amount of customer data.
"We're right now really focused on doing more with less data," Enright said at a data-protection conference in Brussels on Wednesday. "This is counter-intuitive to a lot of people, because the popular narrative is that companies like ours are trying to amass as much data as possible."
Holding on to data that isn't delivering value for users is "a risk," he said.
Powerful new European Union rules took effect across the bloc in May, giving privacy watchdogs the power to fine companies as much as 4% of annual global sales for serious violations. Google has come under scrutiny many times in Europe, with one probe in France resulting in a 50 million euro ($55 million) fine under the new law.
Pichai also stopped in Brussels on his way to Davos to give a rare public speech, in which he called on regulators to coordinate their approaches to artificial intelligence. The European Union is set to unveil new rules for AI developers in "high risk sectors," such as health care and transportation, according to an early draft obtained by Bloomberg.