OpenAI faces internal pushback after announcing a partnership with defense contractor Anduril. Some OpenAI employees express concerns about military applications of their AI technology.
OpenAI employees took to internal discussion forums Wednesday to voice concerns after the Anduril partnership was announced, calling for more transparency and questioning the military applications of the company's AI systems.
Internal messages reviewed by the Washington Post show employees doubting whether the AI's use can truly be limited to defending against drone attacks. Staff members asked how OpenAI plans to prevent the technology from being turned against manned aircraft or applied to other military purposes.
One employee criticized the company for downplaying the implications of working with a weapons manufacturer, while another raised concerns about potential damage to OpenAI's reputation. Some staff members, however, voiced support for the partnership.
Management responds to concerns
The partnership will use Anduril's drone threat database to train OpenAI's AI models, aiming to improve US and allied forces' capabilities in detecting and defending against unmanned aerial systems.
OpenAI's leadership quickly responded to employees' concerns, emphasizing that the Anduril collaboration is focused solely on defensive systems. The line between offense and defense is blurry, however, and Anduril is also developing autonomous drones capable of lethal attacks.
In internal discussions, executives argued that providing advanced technology to democratically elected governments is crucial, noting that authoritarian states would pursue military AI regardless.
"We are proud to help keep safe the people who risk their lives to keep our families and our country safe," said OpenAI CEO Sam Altman. Some employees countered that the US also provides weapons to authoritarian allies.
The partnership marks a significant shift for OpenAI, opening up its technology to military use. The company had restricted such use until January 2024, when it changed its policies to allow certain military applications, such as cybersecurity.
The Washington Post notes that this reflects a broader trend of AI companies becoming more open to military applications of their technology.