
The tech giant removed previous restrictions on weapons and surveillance systems from its AI guidelines, joining other major AI companies working with defense contractors.


Google has rewritten its ethical guidelines for artificial intelligence. According to the Washington Post, the company has dropped its previous ban on using AI technology in weapons and surveillance systems.

The old guidelines specifically prohibited four types of applications: weapons, surveillance, technologies that "cause or are likely to cause overall harm," and anything that violates international law and human rights.

The new guidelines instead emphasize human oversight and testing to align with "widely accepted principles of international law and human rights" while "minimizing unintended or harmful outcomes."


Google says this change reflects the growing global race for AI leadership. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," wrote Google's Head of AI Demis Hassabis and Senior Vice President James Manyika in a blog post.

Tech sector embraces defense partnerships

The shift marks a stark contrast to 2018, when thousands of Google employees protested military contracts, declaring that Google should not be in the business of war. Those objections now feel like ancient history.

This shift extends beyond Google. OpenAI recently partnered with defense contractor Anduril to develop AI-powered drone defense systems for the US military. Meta has given the US military access to its Llama AI models, while Anthropic is working with Palantir to provide US intelligence and defense agencies specialized versions of Claude through Amazon Web Services. Microsoft reportedly proposed using OpenAI's DALL-E image generator to develop military operations software for the US Department of Defense last year.

Summary
  • Google has revised its ethical AI guidelines, now allowing the use of its AI technology for weapons and surveillance. The company cites global competition for AI leadership as the reason for this change.
  • The updated guidelines state that the technology should be used in accordance with principles of international law and human rights. Google says it aims to minimize unintended or harmful consequences through human supervision and testing.
  • Other AI laboratories, including OpenAI, Meta, Anthropic, and Microsoft, have also shifted their stance. They are now working with arms companies or making their AI technology available to the US military.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.