
OpenAI has removed a section from its terms of service that explicitly prohibited the use of its technology for military purposes.

The previous policy barred the use of OpenAI technologies and ChatGPT for activities posing a high risk of physical harm, explicitly including weapons development and military and warfare applications.

The revised policy includes a clause prohibiting the use of the service to harm oneself or others, citing weapons development as an example. However, military and warfare are no longer explicitly mentioned.

The new policy, effective January 10, 2024: military and warfare are no longer explicitly mentioned. | Image: OpenAI (screenshot)

In the previous version, "military and warfare" was explicitly excluded.

The old policy before January 10, 2024: military and warfare are explicitly mentioned. | Image: OpenAI (screenshot)

The change is part of a larger revision to make the document clearer and easier to read, says OpenAI spokesperson Niko Felix in an email to The Intercept. The goal is to "create a set of universal principles that are both easy to remember and apply."

Felix would not comment on whether the ban on harming others generally excludes military use.

However, he emphasized that the rules still apply to the military if it uses OpenAI technologies to develop or deploy weapons, harm others, destroy property, or engage in unauthorized activities that compromise the security of a service or system.

AI enters the battlefield

The implications of this policy change are unclear, but it comes at a time when the Pentagon and US intelligence agencies are showing a growing interest in AI technologies.

With this change, OpenAI could at least signal a willingness to talk and soften its previously strict stance, for example by opening the door to supporting military operational infrastructures. The military could potentially use a system like ChatGPT to generate text to speed up or automate the flow of information.


Military use of AI has been underway for years. In 2019, the Pentagon announced in a strategy paper that AI would eventually permeate all military domains.

China's activities and the war in Ukraine are believed to have accelerated this development. Possible applications include autonomous drones, which are already being used as weapons in the Ukraine war, or propaganda using deepfakes.

Summary
  • OpenAI has removed a clause from its terms of service that explicitly prohibited the use of its technology for military purposes.
  • The revised policy still prohibits using the service to harm people or develop weapons, but no longer explicitly mentions military and warfare purposes as it did before.
  • The change comes at a time when the Pentagon and U.S. intelligence agencies are showing growing interest in AI technologies. OpenAI's change could mean that the company is softening its previously strict stance.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.