
The U.S. would adhere to strict ethical guidelines, according to a U.S. Air Force general. But that should not be expected of a potential adversary, he said.

During an event at the Hudson Institute, Lt. Gen. Richard G. Moore Jr. was asked how the Pentagon views autonomous warfare. AI ethics is a topic of discussion at the highest levels of decision-making in the Defense Department, said the three-star Air Force general and deputy chief of staff for Air Force plans and programs.

“Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,” Moore said. “And there are those that are willing to go for the ends regardless of what means have to be employed.”

What role AI will play in the future of warfare depends on "who plays by the rules of warfare and who doesn’t. There are societies that have a very different foundation than ours," he said.



In a statement to the Washington Post, Moore said that while ethical AI may not be the United States' sole province, its adversaries are unlikely to act according to the same values.

“The foundation of my comments was to explain that the Air Force is not going to allow AI to take actions, nor are we going to take actions on information provided by AI unless we can ensure that the information is in accordance with our values,” the Washington Post quoted Moore as saying. “While this may not be unique to our society, it is not anticipated to be the position of any potential adversary.”

Moore did not specify who this potential adversary might be. What is clear, however, is that China in particular is perceived by the United States as a potential adversary in an armed conflict in which AI plays a central role.

China also has reasons not to wage autonomous wars

But even though the two nations' ethical guidelines are very different and historically come from opposite sources - in the U.S. often from Christian authors, in China from Marxist-Leninist texts and Communist Party rules - both may ultimately have their reasons for not giving AI systems ultimate control over weapons systems.

"Once you turn over control of a weapons system to an algorithm, worst case, then the party loses control," Mark Metcalf, a lecturer at the University of Virginia and retired U.S. naval officer, said in an interview with The Washington Post.


Ethical standards as a disadvantage?

However, Moore's statements can also be read another way: while refraining from fully autonomous warfare is morally imperative, it may prove to be a disadvantage in a full-scale war, for example in terms of response speed.

Even if the official doctrine of the U.S. armed forces is not to cross the threshold of full autonomy, the Pentagon must still find answers to an adversary that does not play by those rules. If it succeeds, these principles are more likely to prevail in the long run, especially in the event of an actual conflict.

Summary
  • Lt. Gen. Richard G. Moore Jr. discussed the role of AI ethics in the Pentagon's decision-making processes, emphasizing the moral compass of the U.S. "Judeo-Christian society."
  • Moore explained that the Air Force would not allow AI to perform actions or make decisions based on AI information if it did not align with its values.
  • The debate over autonomous warfare raises questions about how the Pentagon will respond to adversaries who may not adhere to the same ethical guidelines.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.