
Military strategy and judgment cannot be based on AI, study suggests

AI can certainly play a role in supporting military decision-making, but it cannot replace human judgment and strategic thinking. A study published in the journal International Security suggests that the use of AI in military operations has clear limits when it comes to complex strategic decision-making.

Using artificial intelligence (AI) for warfare has long been a promise of science fiction and politicians, but new research from the Georgia Institute of Technology contends that only so much can be automated and demonstrates the value of human judgment.

“All of the hard problems in AI are really judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. “You need human sense-making and moral, ethical, and intellectual decisions in an incredibly confusing, fraught, and frightening situation.”

AI decision-making has four key components: data about a situation, interpretation of those data (prediction), determination of the best way to act in line with goals and values (judgment), and action. Machine learning advancements have made prediction easier, which increases the value of data and judgment. And although AI can automate everything from commerce to transportation, judgment still requires humans. Lindsay and University of Toronto Professor Avi Goldfarb make this argument in a paper titled “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in International Security.
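
That division of labor (machines handle prediction, humans supply judgment) can be sketched in a few lines of code. The Python below is purely illustrative: the class, the function names, the thresholds, and the risk_tolerance parameter are assumptions made for this sketch, not anything taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Data: raw information about a situation (e.g., a sensor reading)."""
    signal: float

def predict(obs: Observation) -> float:
    """Prediction: a machine interprets the data.
    A stand-in for a trained model; the numbers are illustrative."""
    return 0.8 if obs.signal > 0.5 else 0.2

def judge(threat_probability: float, risk_tolerance: float) -> str:
    """Judgment: a human weighs the prediction against goals and values.
    This is the step the authors argue cannot be automated away."""
    return "act" if threat_probability > risk_tolerance else "hold"

# Action: the final component follows from data -> prediction -> judgment.
obs = Observation(signal=0.7)
print(judge(predict(obs), risk_tolerance=0.5))  # -> "act"
```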

Many policymakers assume that human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is known as the substitution theory of AI, but Lindsay and Goldfarb argue that AI should be seen not as a substitute for existing human strategy but as a complement to it.

“Machines are good at prediction, but they rely on data and judgment, and the most difficult problems in warfare are information and strategy,” he explained. “Because of its unpredictability, the conditions that make AI work in commerce are the conditions that are most difficult to meet in a military environment.”

Lindsay and Goldfarb cite the mining company Rio Tinto as an example of a firm that uses self-driving trucks to transport materials, lowering costs and risks for human drivers. There are abundant data on traffic patterns and maps, so the trucks require little human intervention unless road closures or obstacles arise.

War, on the other hand, often lacks abundant unbiased data, and judgments about objectives and values are inherently contentious. That does not rule out military AI, but according to the researchers it would be best used on a task-by-task basis in bureaucratically stable environments.

“All the excitement and fear are about killer robots and lethal vehicles, but the worst case for military AI in practice will be classically militaristic problems where you’re really reliant on creativity and interpretation,” Lindsay explained. “However, we should focus on personnel systems, administration, logistics, and repairs.”

According to the researchers, using AI has consequences for both the military and its adversaries. If humans are central to deciding when to use AI in warfare, then military leadership structure and hierarchies may shift depending on who is in charge of designing and cleaning data systems and making policy decisions. This also implies that adversaries will seek to compromise both data and judgment, as both will have a significant impact on the war’s trajectory. Competing against AI may encourage adversaries to manipulate or disrupt data, making sound judgment even more difficult. Human intervention will be even more necessary as a result.
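
As a toy illustration of that point (this example is ours, not the researchers'), consider a simple threshold detector that behaves sensibly on clean sensor readings but is flipped by a handful of adversary-injected values, leaving the final call to human judgment:

```python
def predict_threat(readings: list[float]) -> bool:
    """Toy predictor: flag a threat when the average reading exceeds 0.5."""
    return sum(readings) / len(readings) > 0.5

clean = [0.7, 0.8, 0.6, 0.9, 0.7, 0.8]   # genuine high readings
poisoned = clean + [0.0] * 6             # adversary injects false zeros

print(predict_threat(clean))     # True  -- correct call on unbiased data
print(predict_threat(poisoned))  # False -- manipulated data flips the call
```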

Yet this is just the start of the argument, and of the innovations to come.

“If AI is automating prediction, that’s making judgment and data really important,” Lindsay said. “We’ve already automated a lot of military action with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we’re automating prediction with AI. So, when are we going to automate judgment, or are there components of judgment that cannot be automated?”

Overall, the study points to striking a balance between AI and human judgment in military operations: AI can be a powerful tool, but it is not a replacement for human decision-making and strategic thinking.
