
Killer robots or AI helpers: how artificial intelligence is changing warfare.

There are four major application areas for AI technology in the military: logistics, reconnaissance, cyberspace, and warfare.

In the first three areas, advanced AI applications are already in use or being tested. AI is helping to optimize logistics chains, predict needed maintenance, find vulnerabilities in software, and combine vast amounts of data into actionable information.

Artificial intelligence is therefore already having an impact on military operations. But the fighting itself is still primarily carried out by humans.


The third revolution in warfare

A harbinger of AI-assisted warfare is the growing number of remotely piloted drones in conflict zones around the world: between 2009 and 2017, the number of American soldiers in combat decreased by 90 percent while the number of U.S. drone strikes increased tenfold. Today, U.S., Russian, Israeli, Chinese, Iranian, and Turkish drones are carrying out strikes in the Middle East, Africa, Southeast Asia, and Europe.

Fully autonomous drones that identify and attack their targets without human intervention are a realistic possibility - and according to a UN report, they may already have been deployed.

An MQ-9 Reaper drone with an air-to-air missile
The U.S. Air Force is testing the MQ-9 Reaper drone, proven for reconnaissance, for air-to-air combat and missile defense. | Image: USAF

Such systems are examples of lethal autonomous weapons systems ("LAWS"). There are international efforts to regulate them heavily or to ban them altogether. But because they can make or break a war, the major military powers in particular are reluctant to give them up.

Autonomous weapons are considered the third revolution in warfare, after the invention of gunpowder and the atomic bomb. They have the same potential to shift the balance of power.

Abandoning the use of advanced AI technology in weapons systems is akin to abandoning electricity and internal combustion engines, says Paul Scharre, a former soldier, consultant to the U.S. Department of Defense, and author of "Army of None: Autonomous Weapons and the Future of War".


AI in war: autonomy in three stages and loitering munitions

Not all autonomous weapons systems are dystopian killer robots. The autonomy of weapon systems can be roughly divided into three levels, illustrated in the sketch after this list:

  • Semi-autonomous weapon systems (human in the loop)
  • Human-supervised autonomous weapon systems (human on the loop)
  • Fully autonomous weapon systems (human out of the loop)
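
The difference between these levels is essentially who decides whether a weapon engages. The following minimal Python sketch is purely illustrative and modeled on no real system; the enum and function are hypothetical stand-ins for the three control models:

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    """Rough levels of weapon-system autonomy (hypothetical model)."""
    HUMAN_IN_THE_LOOP = auto()      # semi-autonomous
    HUMAN_ON_THE_LOOP = auto()      # human-supervised autonomous
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous

def may_engage(level: AutonomyLevel, approved: bool, vetoed: bool) -> bool:
    """Who decides whether the system engages, per autonomy level."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return approved        # nothing happens without explicit human approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not vetoed      # system acts on its own unless a human intervenes
    return True                # no human check at all
```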

An example of a semi-autonomous weapon system is the "fire and forget" missile, which independently attacks a previously designated target after being fired by a human. This allows pilots to attack multiple targets in quick succession. Such missiles are used by militaries worldwide to engage air and ground targets.

Human-supervised autonomous weapons systems have traditionally been more defensive in nature and are used wherever human reaction time cannot keep up with the speed of battle.

An Aegis operator aboard the USS Mustin
The Aegis system can be adapted to a captain's requirements and is operated by specialists. In the event of an attack, an automated defense can be activated. | Image: US Navy

Once activated by a human, they independently attack targets - but under constant human supervision. Examples include the Aegis combat system used on Navy ships, which independently attacks missiles, helicopters, and aircraft once activated, or the Patriot and Iron Dome missile defense systems. More than 30 countries are already using such systems, says Scharre.


But that has changed with the development of a new class of weapons known as "loitering munitions." These warhead-carrying aerial drones have autonomous capabilities and are programmed by a human to attack specific targets; the operator can abort the attack. They can provide air support to troops without putting fighter jets or helicopters at risk.

Such drones blur the line between supervised and fully autonomous weapons systems and have been in use for at least a decade. Widely used systems include the Israeli Harop, the American Switchblade, the Russian Lancet, and the Iranian Shahed. Their recent impact in the Armenia-Azerbaijan clashes and the Ukraine war has led some military experts to see the degree of autonomy enabled by modern technology as part of deterrence.

Admiral Lee Hsi-ming, for example, former chief of Taiwan's general staff, former vice minister of national defense, and former commander of the Taiwanese navy, sees loitering munitions as essential to Taiwan's ability to deter a possible Chinese war of conquest.

The state of autonomous war machines

To date, no military officially operates fully autonomous weapons systems. A fully autonomous war remains, for now, a dystopian vision of AI-assisted warfare.

From a technical perspective, there is one main reason such systems are not yet widely deployed: the required AI technology does not yet exist. The machine learning boom of the past decade has produced countless advances in AI research, but current AI systems are not suited to professional military use.

In theory, they promise precision, reliability, and high-speed response. But in practice, they still fail due to the complexity of the real world.

Current AI systems often don't understand context, can't cope reliably with changing circumstances, are vulnerable to attack, and certainly aren't suited to making ethical life-and-death decisions. For the same reasons, despite massive investments and big promises, autonomous cars are still not widely driving on our roads.

While NATO and the U.S. have expressed support for the development and deployment of autonomous weapon systems, they do not want to go beyond supervised autonomous weapon systems - humans should remain in control, which also requires reliable AI systems. The U.S. military's research arm, DARPA, is funding billions of dollars in related developments.

But what exactly is control? It's not always clear where the line should be drawn - is it enough for a human to launch a weapons system that then kills on its own? Does the operator need to be able to turn it off again? What about situations where human decision-making speed is no longer sufficient?

Human-machine cooperation in the air

Currently, the focus of militaries and defense contractors is primarily on fusing data from different sensors and on developing systems that cooperate with humans. The U.S. military's focus is on cognitive assistance in joint warfare, as former JAIC chief Nand Mulchandani said in 2020.
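
As a toy example of what "fusing sensor data" means at its simplest: the textbook inverse-variance method below combines two noisy estimates of the same quantity into one more reliable estimate. The sensors and numbers are hypothetical, and real military fusion systems are far more elaborate:

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Inverse-variance weighted fusion of two independent sensor estimates.

    A textbook building block of sensor fusion: the combined estimate
    trusts the less noisy sensor more, and its variance drops below
    that of either input.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_mean, fused_var

# Hypothetical example: a radar and an infrared sensor both estimate
# a target's range (in km), with different noise levels.
range_est, range_var = fuse_estimates(mean_a=12.4, var_a=0.25,   # radar
                                      mean_b=12.1, var_b=0.64)   # IR sensor
print(f"fused range: {range_est:.2f} km (variance {range_var:.3f})")
```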

Some of these systems are designed to fly, drive, or dive; to gather intelligence; to attack designated targets on their own; or to deliver supplies. But they always receive their missions, targets, and clearances from a human.

The U.S. Air Force, for example, tested variants of Kratos' XQ-58A as part of the Skyborg program. The stealth drones are designed to be inexpensive and to fly alongside a human pilot, taking orders from the pilot while serving as a supporting reconnaissance and weapons platform. The program has been classified since 2021, but up to a dozen of the drones are expected to be operational by spring 2023. Meanwhile, the U.S. Navy is developing autonomous tanker aircraft based on the MQ-25A Stingray drone.

Boeing has also developed a Loyal Wingman drone and is selling it to the Royal Australian Air Force (RAAF). The Russian Air Force, meanwhile, relies on the larger S-70 Okhotnik drone, and the Chinese Air Force is betting on the FH-97A.

In combat, these drones are to be controlled by a human pilot from a Next-Generation Fighter (NGF), who in turn will be supported on board by an AI co-pilot. This reduces communication latency.

In Europe, France, Germany, and Spain plan to develop autonomous drones as remote carriers under the Next Generation Weapon System (NGWS) program, which also encompasses the NGF. A second European program, called Tempest, is funded by the United Kingdom, Italy, and Japan.

AI drones in water and on land

Drones are also expected to assist humans in the water. Examples include semi-autonomous vessels such as the U.S. Navy's unmanned surface vessel Sea Hunter, Boeing's Orca submarine drone, and the simple Ukrainian drone boats attacking the Russian Navy's Black Sea Fleet.

For use on the ground, defense contractors are developing weapons such as the Ripsaw M5 combat drone, designed to accompany U.S. Army tanks, and the Russian Uran-9 robotic combat vehicle, which has already been used - arguably ineffectively - in Syria. U.S. infantry units operate tiny reconnaissance drones with thermal imaging cameras, and the U.S. Air Force is testing the semi-autonomous robotic dog from Ghost Robotics.

A U.S. soldier with a Black Hornet drone
The U.S. Army is relying on the tiny Black Hornet drone for rapid reconnaissance by infantry. | Image: DoD

The Ukraine War also demonstrates the central role of reconnaissance by drone and communication between the drone operator, artillery, and infantry. The precision gained in this way allowed Ukraine to effectively halt the Russian advance.

Low-altitude, low-cost reconnaissance is still done with human eyes, but Ukraine is training neural networks on the available footage, according to a Ukrainian drone commander. This allows drones to automatically detect Russian soldiers and vehicles, dramatically speeding up the OODA loop (Observe, Orient, Decide, Act).
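
As an illustration only, the snippet below shows what one step of such a detection pipeline might look like, using a generic pretrained detector from torchvision as a stand-in. The model, the confidence threshold, and the frame file are assumptions; nothing here reflects the actual Ukrainian system, which would be fine-tuned on drone footage with its own target classes:

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

# Generic pretrained detector as a placeholder; a real pipeline would be
# trained on drone footage with classes such as soldiers or vehicles.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("drone_frame.jpg")  # hypothetical still from drone video
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Keep only confident detections and report them to the operator --
# automating the "Observe" and "Orient" steps of the OODA loop.
labels = weights.meta["categories"]
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(f"{labels[int(label)]} at {box.tolist()} (confidence {score:.2f})")
```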

Artificial intelligence in cyberspace and future conflicts

Away from real-world warfare, AI is increasingly being used in cyberspace. There, it can help detect malware or identify patterns in cyberattacks on critical infrastructure.

In late 2022, NATO tested AI for cyber defense: six teams were tasked with setting up computer systems and power grids in a fictitious military base and keeping them running during a simulated cyberattack.

Three of the teams were given access to a prototype Autonomous Intelligence Cyberdefense Agent (AICA) developed by the U.S. Department of Energy's Argonne National Laboratory. The test showed that AICA helped the teams better understand - and better protect - the relationships between attack patterns, network traffic, and target systems, according to cybersecurity expert Benjamin Blakely, who co-led the experiment.
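
AICA's internals are not public, but a common baseline for this kind of pattern detection is unsupervised anomaly detection on network flow features. The sketch below uses scikit-learn's IsolationForest on synthetic data purely as a stand-in; the feature choices and numbers are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for network flow features (e.g., bytes sent,
# packets per second, distinct ports contacted) -- not real AICA data.
normal_traffic = rng.normal(loc=[500, 40, 3], scale=[50, 5, 1], size=(1000, 3))

# Train only on traffic assumed benign; the model learns its shape.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A suspicious flow (huge transfer, few packets, many ports) next to
# an ordinary one; -1 flags an anomaly, 1 looks normal.
suspicious = np.array([[5000, 2, 60], [480, 41, 3]])
print(detector.predict(suspicious))
```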

Whether it's cybersecurity, cognitive assistance, sensor fusion, loitering munitions, or armed robotic dogs, artificial intelligence is already changing the battlefield. The effects will increase in the coming years as advances in robotics, world model development, or AI-powered materials science and manufacturing techniques enable new weapons systems.

LAWS are also likely to be part of that future - at least, that is what a regulatory proposal titled "Principles and Good Practices on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems" suggests. It was submitted to the UN in March 2022 by Australia, Canada, Japan, the Republic of Korea, the United Kingdom, and the United States.

Summary
  • Artificial intelligence can be used in the military in four areas: logistics, reconnaissance, cyberspace, and warfare.
  • Numerous countries are developing or actively using AI systems for the military - most recently in the Ukraine war.
  • Autonomous weapons are considered the third revolution in warfare after the invention of gunpowder and the atomic bomb.
  • Their use is limited partly by the current state of the art and partly by ethical guidelines, but could increase in the future.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.