
Artificial Intelligence in Conflicts: A Reconsideration of Human Intervention in Military Decisions


The Presidential Statement and the Ethical Debate

In his address marking the 164th anniversary of the founding of the Italian Army, Italian President Sergio Mattarella addressed technological evolution and artificial intelligence in the military sphere, expressing a cautious stance on the decision-making autonomy of automated systems.


The President stated: “It would be a serious mistake to imagine that in an increasingly cybernetic, robotized world, equipped with artificial intelligence, we can do without human awareness, the capacity for discernment, the courage to act, feelings such as altruism and solidarity, creativity, and everything else that belongs solely to the human being. Technological evolution does not erase the reasons of ethics based on respect for human dignity. It is a matter of adapting the former to the latter, of avoiding handing over to AI-equipped weapons systems the evaluation and choice regarding the life or death of people.”


It is worth noting that Mattarella himself emphasized the role of philosophy as a foundation of military ethics. In his address, the President acknowledged the importance of the “study of philosophy introduced in all military training institutes,” precisely to help soldiers in “this work of growth,” which must provide “soldiers with a coat of values” and “the ability to discern what is good and what is evil.” This emphasis on the philosophical dimension underscores that Mattarella himself recognizes the value of a systematic and rational approach to ethical questions, an approach that could, in principle, also be embodied in artificial intelligence systems.


This position, while understandable from a humanistic perspective, deserves critical analysis in light of technological advances and the historical record of armed conflict.


The Limits of Human Decision-Making in War Contexts


It should be noted that the historical examples of military error discussed below date from times when modern technology and AI systems were unavailable or unused. Nonetheless, these examples reveal intrinsic limitations of human decision-making that advanced technological systems could, in theory, overcome.


During World War II, the Italian army made a series of significant errors with tragic consequences. The invasion of Greece in October 1940 is an emblematic case: troops were sent into the mountains of Epirus with inadequate winter equipment. An artificial intelligence system, analyzing historical weather data and terrain characteristics, could have predicted logistical needs with far greater accuracy, avoiding a situation in which soldiers were left without proper clothing or support, a failure that stalled the campaign and ultimately forced German intervention.


The defense of Sicily during Operation Husky in 1943 shows another failure of human judgment. Despite the predictability of the Allied attack, Italian coastal units were ineffectively distributed, poorly armed, and placed in non-strategic locations. An AI system could have analyzed vulnerabilities along the Sicilian coast, identified the most probable landing points based on geographic factors, and optimized the placement of limited resources in strategically relevant areas.
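

To make this kind of optimization concrete, here is a minimal sketch in Python, not a reconstruction of any real planning tool: given rough landing probabilities for candidate sites (the names are historical, every number is invented), it allocates a small number of defensive units so as to maximize the expected chance that a landing meets a defended coast.

```python
# Minimal sketch: allocating scarce defensive units across candidate
# landing sites to maximize the expected chance that a landing meets a
# defended coast. Site names are historical; all probabilities and the
# unit count are invented for illustration.
from itertools import combinations

# Hypothetical estimated landing probabilities per site.
sites = {"Gela": 0.40, "Licata": 0.25, "Syracuse": 0.20, "Marsala": 0.15}
units_available = 2  # far fewer units than plausible landing points

def expected_coverage(assignment) -> float:
    """Probability that the landing occurs at a defended site."""
    return sum(sites[s] for s in assignment)

# Exhaustive search is fine at this toy scale.
best = max(combinations(sites, units_available), key=expected_coverage)
print(best, round(expected_coverage(best), 2))  # ('Gela', 'Licata') 0.65
```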


The catastrophic handling of the armistice on September 8, 1943, left hundreds of thousands of soldiers without clear orders, in a communicative chaos where many units received contradictory information or none at all. An AI system could have ensured communications were consistent, traceable, and efficiently distributed to all units, maintaining the continuity of the command chain even in a crisis.


In more recent times, examples such as the attack on the Mariupol theatre in March 2022, where the Russian word for “CHILDREN” was written on the ground in letters visible from the air, show human failure to recognize or respect evident signs of civilian presence. Advanced AI-based visual recognition systems could identify such signals more reliably, automatically blocking attacks on protected structures.


The bombings in densely populated areas of Gaza during the 2023–2024 conflict, with a high percentage of civilian casualties, highlight the human inability to accurately distinguish between combatants and civilians in complex urban environments—a challenge worsened by combat stress and potential decision-making biases. AI systems could analyze multiple data sources in real time to more accurately distinguish between legitimate military targets and civilian populations, potentially reducing collateral damage in future conflicts.
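

As a purely illustrative sketch of what “analyzing multiple data sources” can mean, the snippet below fuses independent pieces of evidence into a single probability via naive Bayesian updating. The prior and likelihood ratios are invented; real battlefield data would be noisy, correlated, and adversarially manipulated, which is precisely the hard part.

```python
# Purely illustrative naive Bayesian fusion of independent evidence sources
# into a single probability. Prior and likelihood ratios are invented.

def fuse(prior: float, likelihood_ratios: list[float]) -> float:
    """Update P(combatant) given ratios P(evidence|combatant)/P(evidence|civilian)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical evidence: a signals intercept (weakly incriminating), a
# movement pattern (ambiguous), a protected-site marker (strongly exculpatory).
p = fuse(prior=0.30, likelihood_ratios=[3.0, 1.1, 0.05])
print(round(p, 3))  # ~0.066: the exculpatory marker dominates the estimate
```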



The Mathematical and Systemic Approach to Military Strategy

Military strategy can be framed, in large part, as the optimization of objective functions under uncertainty, a domain in which artificial intelligence excels. Alan Turing’s pioneering work at Bletchley Park during World War II on breaking the Enigma cipher demonstrated how computational approaches can surpass human capabilities in critical military contexts.
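

This framing of strategy as optimization under uncertainty reduces to a toy example: choose the course of action with the highest expected payoff over a probability distribution of enemy behavior. The scenarios, probabilities, and payoffs below are invented; real planning problems involve vastly larger state spaces.

```python
# Toy example of strategy as expected-value optimization under uncertainty.
# All scenarios, probabilities, and payoffs are invented for illustration.

# Probability distribution over enemy courses of action.
scenarios = {"enemy_attacks_north": 0.6, "enemy_attacks_south": 0.4}

# Payoff of each friendly course of action under each scenario (arbitrary units).
payoffs = {
    "reinforce_north": {"enemy_attacks_north": 8, "enemy_attacks_south": 2},
    "reinforce_south": {"enemy_attacks_north": 3, "enemy_attacks_south": 9},
    "hold_reserve":    {"enemy_attacks_north": 5, "enemy_attacks_south": 5},
}

def expected_value(action: str) -> float:
    return sum(p * payoffs[action][s] for s, p in scenarios.items())

best = max(payoffs, key=expected_value)
print(best, round(expected_value(best), 2))  # reinforce_north 5.6
```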


Stuart Russell, professor of computer science at the University of California, Berkeley, and a leading authority on AI, has emphasized that “autonomous systems could potentially reduce civilian casualties in conflicts through better target identification and greater accuracy than human operators, especially under conditions of stress or fatigue.”


AI systems have shown superior capabilities in many complex decision-making domains:


  1. Data processing at scale: AI systems can simultaneously analyze thousands of intelligence sources in real time, exceeding human cognitive limits.

  2. Impartiality: They are not subject to emotional bias, fear, revenge, or ethnic prejudices that often influence human decisions in war zones.

  3. Consistency: They apply rules of engagement coherently, unaffected by fatigue or combat stress (see the sketch after this list).

  4. Decision speed: In scenarios where milliseconds count—such as missile defense—AI can react faster than any human operator.
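

To illustrate the consistency point, rules of engagement can be encoded as explicit, machine-checkable predicates that return the same verdict, with an audit trail, on every invocation. The rules and the target record below are hypothetical simplifications, not a real targeting doctrine.

```python
# Hypothetical sketch of rules of engagement as explicit, auditable checks
# applied identically to every decision. Rules and fields are simplifications.
from dataclasses import dataclass

@dataclass
class Target:
    is_military: bool
    protected_site: bool       # hospital, school, shelter, etc.
    est_civilian_harm: float   # expected civilian casualties
    military_value: float      # expected military advantage

RULES = [
    ("distinction",     lambda t: t.is_military),
    ("protected_site",  lambda t: not t.protected_site),
    ("proportionality", lambda t: t.est_civilian_harm <= 0.1 * t.military_value),
]

def evaluate(target: Target) -> tuple[bool, list[str]]:
    """Apply every rule; return the verdict plus an audit trail of failures."""
    failures = [name for name, rule in RULES if not rule(target)]
    return (not failures, failures)

ok, why = evaluate(Target(True, True, 0.0, 10.0))
print(ok, why)  # False ['protected_site'] -- identical verdict on every run
```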


Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, argues that “autonomous robots could be designed to follow the laws of war and rules of engagement more strictly than humans, potentially reducing atrocities on the battlefield.”


It is worth noting that early forms of autonomous systems are already in use. Israel’s Iron Dome, which has intercepted thousands of rockets aimed at civilian areas, uses advanced algorithms to determine which incoming projectiles pose a real threat to populated zones and should therefore be intercepted. The system has demonstrated an effectiveness unattainable by any human operator given the reaction times required.
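

The publicly described triage logic can be caricatured in a few lines: predict where the projectile will land and engage only if the impact point falls inside a defended zone. The sketch below is a drag-free, one-dimensional toy with invented coordinates, not the actual system.

```python
# Toy version of the publicly described Iron Dome triage logic: predict a
# rocket's impact point from a simplified (drag-free) ballistic arc and
# engage only if it falls inside a defended zone. All numbers are invented.
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_x(x0: float, vx: float, vz: float, z0: float) -> float:
    """Horizontal impact point of a drag-free ballistic trajectory."""
    # Time of flight: solve z0 + vz*t - 0.5*G*t^2 = 0 for the positive root.
    t = (vz + math.sqrt(vz ** 2 + 2 * G * z0)) / G
    return x0 + vx * t

# Hypothetical defended zones as (start, end) intervals along one axis, in m.
defended_zones = [(9_000, 12_000), (15_000, 16_500)]

def should_intercept(x0: float, vx: float, vz: float, z0: float) -> bool:
    xi = impact_x(x0, vx, vz, z0)
    return any(a <= xi <= b for a, b in defended_zones)

print(should_intercept(0, 250, 200, 0))  # ~10.2 km: defended zone -> True
print(should_intercept(0, 150, 200, 0))  # ~6.1 km: open ground -> False
```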


In Ukraine, Western air defense systems such as NASAMS use sophisticated algorithms to identify and classify aerial threats. These systems, while keeping a human in the decision loop, delegate much of the analysis and identification to computerized systems precisely because they outperform human capabilities in terms of speed and accuracy.
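

A minimal sketch of this human-in-the-loop pattern, with entirely hypothetical track data and scoring: the machine detects, classifies, and ranks threats, while every engagement still requires explicit operator confirmation.

```python
# Minimal human-in-the-loop sketch: the machine classifies and prioritizes
# tracks; a human confirms every engagement. All data is hypothetical.

def classify(track: dict) -> str:
    """Crude hypothetical classifier based on speed and altitude."""
    if track["speed_mps"] > 250 and track["altitude_m"] < 3_000:
        return "cruise_missile"
    if track["speed_mps"] < 60:
        return "uav"
    return "aircraft"

def triage(tracks: list[dict]) -> list[dict]:
    """Machine side: annotate tracks and sort them by estimated urgency."""
    for t in tracks:
        t["class"] = classify(t)
        t["urgency"] = t["speed_mps"] / max(t["range_km"], 1)
    return sorted(tracks, key=lambda t: -t["urgency"])

def operator_review(queue: list[dict]) -> None:
    """Human side: every engagement requires explicit confirmation."""
    for t in queue:
        answer = input(f"Engage {t['class']} at {t['range_km']} km? [y/N] ")
        if answer.lower() == "y":
            print("Engagement authorized by operator.")

tracks = [
    {"id": 1, "speed_mps": 280, "altitude_m": 900, "range_km": 40},
    {"id": 2, "speed_mps": 45, "altitude_m": 1_500, "range_km": 12},
]
operator_review(triage(tracks))
```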


Toward a Balanced Future: The Ethical Integration of Artificial Intelligence in War Contexts

The analysis of historical and recent examples demonstrates that the issue of artificial intelligence in military decisions deserves deeper reflection than a simple opposition between technology and humanity suggests.


The Real Potential of Military AI

Advanced AI systems offer capabilities that could significantly improve the conduct of military operations. They can simultaneously analyze extremely complex and dynamic scenarios, processing vast streams of intelligence, satellite imagery, and urban risk maps with a level of precision unattainable by human analysis. The real-time reaction speed—crucial in areas like missile defense or targeted strikes—outpaces human abilities by orders of magnitude.


However, it’s critical to acknowledge that these systems depend heavily on reliable data and accurate training models. In wartime contexts, data may be incomplete, deliberately falsified, or inherently ambiguous, posing significant challenges to system effectiveness.


The Limits of Current AI

Contemporary artificial intelligence technologies present substantial limitations that cannot be ignored. The “black box problem” remains a core challenge: the decisions of a deep neural network may be opaque even to its creators, raising concerns about the verifiability and reliability of such systems in high-risk environments. The issue of legal accountability remains unresolved: who is responsible in the case of an error? The system’s designer? The military commander who delegated the decision? Moreover, the problem of biases in training data risks perpetuating or amplifying preexisting cultural or geopolitical prejudices.


The thesis that the presumed “moral superiority” of human judgment in conflict is open to question is bold but, in light of the historical evidence, justified. Military history demonstrates that humans are by no means immune to immoral, inefficient, or catastrophic decisions. It is equally essential to recognize, however, that human morality is evolutionary and contextual, rooted in social and cultural experience that AI systems, lacking consciousness and genuine empathy, do not possess.


The most realistic and desirable goal, as theorized by scholars like Arkin and Asaro, is a complementary integration between human and technological capabilities. In this model, artificial intelligence assists but does not replace human judgment—especially in decisions with lethal implications. The human being remains the ultimate custodian of the moral and legal responsibility for actions taken, while embedded ethical systems must be designed with transparent and verifiable criteria.


President Mattarella’s position, though rooted in a legitimate concern for the centrality of the human in ethical decisions, could benefit from a more nuanced understanding of the relationship between artificial intelligence and moral judgment in wartime contexts.


Real ethical progress lies in the ability to develop systems in which artificial intelligence can enhance the best qualities of human judgment—such as contextual understanding and empathy—while mitigating structural weaknesses like cognitive bias and perceptual limits. Only through this complementary integration can we hope to reduce the tragedies that have characterized armed conflicts throughout the centuries.


The challenge for the future is not simply technological, but profoundly philosophical and cultural: how to build artificial intelligence systems that incorporate our highest ethical values while recognizing and compensating for the intrinsic limits of human cognition. This is the true frontier of military ethics in the age of artificial intelligence.


References


History and Military Conflicts

– Beevor, A. (2012). The Second World War. Little, Brown.
– Del Boca, A. (2007). Mussolini’s Gas: Fascism and the War in Ethiopia. Editori Riuniti.
– Hastings, M. (2004). Armageddon: The Battle for Germany, 1944–1945. Knopf.
– Knox, M., & Roberts, W. M. (1997). The Italian Army and the Second World War. Cambridge University Press.
– Mack Smith, D. (1997). Italy and Its Monarchy. Yale University Press.
– Rodogno, D. (2006). Fascism’s European Empire: Italian Occupation During the Second World War. Cambridge University Press.


Recent Conflicts and International Law

– Human Rights Watch. (2022). Ukraine: Mariupol Theater Hit by Russian Attack Sheltered Hundreds. March 2022.
– Jamaluddine, Z., et al. (2024). Traumatic injury mortality in the Gaza Strip from Oct 7, 2023, to June 30, 2024: a capture–recapture analysis. The Lancet.
– United Nations Office for the Coordination of Humanitarian Affairs (OCHA). (2023–2024). Occupied Palestinian Territory – Humanitarian Update.
– Amnesty International. (2023). Israel and Occupied Palestinian Territories 2023 Report.
– International Committee of the Red Cross (ICRC). (2015). Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects.


Artificial Intelligence and Military Strategy

– Russell, S. J. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
– Russell, S. J., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
– Turing, A. M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230–265.
– Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.


Ethics, Philosophy, and Human Responsibility

– Arkin, R. C. (2009). Governing Lethal Behavior in Autonomous Robots. Chapman and Hall/CRC.
– Arkin, R. C. (2010). The Case for Ethical Autonomy in Unmanned Systems. Journal of Military Ethics, 9(4), 332–341.
– Asaro, P. (2012). On Banning Autonomous Lethal Systems: Human Rights, Automation and the Dehumanizing of Lethal Decision-making. International Review of the Red Cross, 94(886), 687–709.
– Asaro, P. (2020). Autonomous Weapons and the Ethics of Artificial Intelligence. In S. M. Liao (Ed.), Ethics of Artificial Intelligence (pp. 212–236). Oxford University Press.
– Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press.
– Sharkey, N. (2008). The Ethical Frontiers of Robotics. Science, 322(5909), 1800–1801.
– Sharkey, N. (2012). The Evitability of Autonomous Robot Warfare. International Review of the Red Cross, 94(886), 787–799.
– Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62–77.
– Sparrow, R. (2016). Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Ethics & International Affairs, 30(1), 93–116.
– Wallach, W., & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.


Contemporary Military Technologies and Applied AI

– Israeli Ministry of Defense. (2021). Iron Dome Fact Sheet.
– Raytheon Technologies. (2023). NASAMS Air Defense System Overview.
– Palantir Technologies. (2022). AI-Powered Situational Awareness Platforms for Defense.


Official Documents

– Mattarella, S. (2025). Speech on the Occasion of the 164th Anniversary of the Founding of the Italian Army. Quirinale, Rome, May 2025.

 
 
 
