On Tuesday, Google unveiled significant revisions to the guidelines that govern its use of artificial intelligence and other advanced technologies. The company has eliminated language outlining its commitment to refrain from developing “technologies that cause or are likely to cause overall harm,” “weapons or technologies primarily intended to injure people,” “surveillance technologies that infringe upon internationally accepted norms,” and “technologies that violate widely recognized principles of international law and human rights.”
These updates were communicated through a note added to the top of a 2018 blog post that originally introduced the principles. “We’ve updated our AI Principles. Visit AI.Google for the latest,” the note stated.
In a blog entry released on Tuesday, two Google executives cited the growing adoption of AI, shifting standards, and global competition in AI as factors prompting the need for an overhaul of Google’s principles.
Originally published in 2018 amid internal backlash over Google’s involvement in a US military drone contract, the principles sought to reassure concerned employees. In response to the protests, Google declined to extend the government contract and also laid out guidelines to steer the future applications of its advanced technologies, such as AI. Among other commitments, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that threaten human rights.
However, in Tuesday’s announcement, Google rescinded those assurances. The updated webpage no longer specifies a list of prohibited applications for Google’s AI projects. Instead, the revised document allows Google greater flexibility to explore potentially sensitive applications. It now asserts that the company will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user objectives, social responsibility, and widely accepted principles of international law and human rights.” Google also commits to working to “mitigate unintended or adverse consequences.”
“We believe democracies should take the lead in AI development, guided by fundamental values like freedom, equality, and respect for human rights,” wrote James Manyika, Google’s senior vice president for research, technology, and society, alongside Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research division. “We also believe that companies, governments, and organizations that share these principles should collaborate to create AI that safeguards individuals, fosters global progress, and bolsters national security.”
They further noted that Google will persist in prioritizing AI projects “that align with our mission, scientific focus, and expertise while adhering to widely accepted principles of international law and human rights.”
Several Google employees have voiced apprehension regarding these changes in discussions with WIRED. “It’s extremely concerning to witness Google abandon its commitment to the ethical deployment of AI technology without soliciting input from its workforce or the public at large, especially given long-standing employee sentiments against the company’s involvement in warfare,” stated Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA.
Since US President Donald Trump’s return to office last month, many corporations have moved to reconsider policies that promote equity and other progressive ideals. Google spokesperson Alex Krasov, however, indicated that the changes had been in development for considerably longer.
Google has outlined its revamped objectives as pursuing ambitious, responsible, and collaborative AI initiatives. Terms such as “be socially beneficial” and “maintain scientific excellence” have been discarded, while an emphasis on “respecting intellectual property rights” has been added.