SCIENCE & TECHNOLOGY
Google has updated its AI ethics policy, removing its previous commitment not to use artificial intelligence for weapons or for surveillance that violates internationally accepted norms.
The California-based tech giant's previous "AI Principles" outlined a clear stance against pursuing AI technologies likely to cause harm, including those involving weapons and mass surveillance. However, the newly revised guidelines announced on Tuesday focus instead on developing AI "responsibly" and in accordance with "widely accepted principles of international law and human rights."
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," stated Google DeepMind chief Demis Hassabis and research labs senior vice president James Manyika in a blog post. They emphasized the need for collaboration between companies, governments, and organizations to create AI that promotes global growth and supports national security.
History of Google's AI Ethics Stance
Google's initial stance on AI ethics emerged in 2018 after widespread employee protests over the company's involvement in Project Maven, a US Department of Defense initiative that leveraged AI for drone strike target identification. The backlash led Google to discontinue its contract with the Pentagon and declare it would not compete for a $10 billion cloud computing contract due to ethical concerns.
Policy Change Amid Political Developments
The updated policy follows the attendance of Google's parent company Alphabet Inc.'s CEO, Sundar Pichai, at the January 20 inauguration of US President Donald Trump, alongside other tech leaders including Amazon's Jeff Bezos and Meta's Mark Zuckerberg.
On his first day in office, Trump rescinded an executive order issued by former President Joe Biden that had required AI companies to share safety test results with the government before releasing new systems to the public. The revocation removed key safeguards governing emerging AI technologies.
Industry Implications
The move by Google signals a shift in the tech industry’s approach to AI ethics, raising questions about the future development and deployment of AI-powered technologies in sensitive areas such as defense and surveillance.
As AI continues to evolve rapidly, stakeholders are watching closely to see how tech giants balance innovation, ethics, and national security concerns.