Summary

OpenAI faces internal pushback after announcing a partnership with defense contractor Anduril. Some OpenAI employees express concerns about military applications of their AI technology.


OpenAI employees took to internal discussion forums Wednesday to voice their concerns after the Anduril partnership was announced. They called for more transparency and questioned the military applications of their AI systems.

Internal messages reviewed by the Washington Post show employees doubting whether AI use can truly be limited to defending against drone attacks. Staff members asked how OpenAI plans to prevent the technology from being used against manned aircraft or other military purposes.

One employee criticized the company for downplaying the implications of working with a weapons manufacturer, while another raised concerns about potential reputation damage. Some staff members did voice support for the partnership.


Management responds to concerns

The partnership will use Anduril's drone threat database to train OpenAI's AI models, aiming to improve US and allied forces' capabilities in detecting and defending against unmanned aerial systems.

OpenAI's leadership quickly responded to employees' concerns, emphasizing that the Anduril collaboration is focused solely on defensive systems. The line between offense and defense is blurry, however, and Anduril is also developing autonomous drones capable of lethal attacks.

In internal discussions, executives argued that providing advanced technology to democratically elected governments is crucial, noting that authoritarian states would pursue military AI regardless.

"We are proud to help keep safe the people who risk their lives to keep our families and our country safe," said OpenAI CEO Sam Altman. Some employees countered that the US also provides weapons to authoritarian allies.

The partnership caps a significant shift for OpenAI this year toward opening up its technology to military use. The company had prohibited military applications of its technology until January 2024, when it revised its policies to allow certain uses, such as cybersecurity.


The Washington Post notes that this reflects a broader trend of AI companies becoming more open to military applications of their technology.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Summary
  • Some OpenAI employees have expressed concerns about the company's partnership with defense contractor Anduril, questioning the military applications of their AI technology and the potential impact on OpenAI's reputation.
  • The partnership aims to use Anduril's drone threat database to train OpenAI's AI models, improving the ability of US and allied forces to detect and defend against unmanned aerial systems, but some employees doubt whether the technology's use can be limited to defensive purposes.
  • OpenAI's leadership has responded to staff concerns, arguing that providing advanced technology to democratic governments is crucial, as authoritarian states would pursue military AI regardless.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.