In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technology. While the policy previously prohibited the use of its products for “military and warfare” purposes, that language has now disappeared, and OpenAI has not denied that it is now open to military use.
The Intercept first noticed the change, which appears to have gone live on January 10th.
Unannounced changes to policy wording are common in the tech world as the products those policies govern continue to evolve, and OpenAI is apparently no exception. In fact, the company recently announced that its user-customizable GPTs would launch publicly alongside a vaguely defined monetization policy, which likely required some changes.
But the change to the military-use policy can hardly be a consequence of that particular new product. Nor can it credibly be claimed that dropping “military and warfare” simply makes the policy “clearer” or “more readable,” as OpenAI’s statement about the update suggests. This is a substantive change of policy, not a restatement of the same one.
You can read the current usage policy here and the old usage policy here. Here are screenshots with the relevant parts highlighted:

Before the policy change.

After the policy change.
Clearly the entire document has been rewritten, though whether it is more readable is largely a matter of taste. I happen to think a bulleted list of explicitly prohibited practices is more readable than the more general guidelines that replaced it. But OpenAI’s policy writers evidently think otherwise, and if the looser wording also happens to give them more latitude to interpret as favorable or unfavorable practices that were, until now, flatly disallowed, that is merely a happy side effect.
Although there is still a blanket ban on the development and use of weapons, as OpenAI representative Niko Felix pointed out, you can see that it was originally listed separately from “military and warfare.” After all, the military does more than make weapons, and weapons are made by others besides the military.
It is precisely where those categories do not overlap that, I would speculate, OpenAI is examining new business opportunities. Not everything the Department of Defense does is strictly war-related; as any academic, engineer, or politician knows, the military establishment is deeply involved in all kinds of basic research, investment, small-business funding, and infrastructure support.
OpenAI’s GPT platforms could well be useful to, say, Army engineers looking to summarize decades of documentation on a region’s water infrastructure. How to define and manage relationships with government and military money is a genuine dilemma for many companies. Google’s “Project Maven” famously went a step too far for many, though fewer seemed bothered by the multibillion-dollar JEDI cloud contract. It might be acceptable for academic researchers on an Air Force Research Laboratory grant to use GPT-4, but not for researchers inside AFRL working on the same project. Where do you draw the line? Even a strict “no military” policy has to break down after a few such removes.
That said, the complete removal of “military and warfare” from OpenAI’s prohibited uses suggests that the company is, at the very least, open to serving military customers. I asked the company to confirm or deny that this is the case, warning them that the language of the new policy made it clear that anything short of a denial would be read as a confirmation.
As of this writing, they have yet to respond. I will update this article if I receive a reply.