Seyyed Javad Mohammadi – Artificial Intelligence Expert
From Technological Revolution to the Transformation of Warfare’s Nature
In recent decades, technological developments have fundamentally altered the nature of warfare. Whereas military power in the twentieth century was defined by heavy weaponry and industrial capacity, in the twenty-first century algorithms, data, and intelligent systems have become the determining elements.
Analyses published by the Stockholm International Peace Research Institute (SIPRI) indicate that competition among great powers is increasingly shifting toward the domain of military artificial intelligence. This trend encompasses not only the development of autonomous weapons but also the integration of AI across all levels of military operations—from target identification to tactical decision-making.
In this regard, reports from Foreign Policy on recent conflicts, particularly in Ukraine, demonstrate that the use of algorithms in guiding drones and analyzing battlefield data has become a critical element. This transformation has increased the speed and precision of operations but has simultaneously widened the gap between decision-makers and the human consequences of their actions.
Militarization of Artificial Intelligence: A Threat Beyond Nuclear Weapons
Unlike nuclear weapons, whose use has been constrained due to their catastrophic consequences, military artificial intelligence is rapidly expanding and faces fewer legal and ethical barriers. Moreover, AI-based weapons can operate without direct human oversight, a characteristic that creates new dangers.
Some experts believe that artificial intelligence could evolve into weapons even more dangerous than nuclear arms. The reason lies not in their destructive power but in their potential for widespread deployment, lower costs, and a reduced threshold for use.
Reports from Stanford University also highlight the regulatory challenges of this technology and demonstrate that the gap between technological development and the formulation of legal frameworks is rapidly widening. This gap exacerbates the risk of unregulated use of artificial intelligence in warfare.
From Testing Ground to Human Catastrophe: The Minab School Incident
The application of artificial intelligence in military operations, when coupled with incomplete data or algorithmic biases, can have catastrophic consequences. In recent attacks against Iran, reports have emerged regarding the use of AI-based systems for identification and targeting, indicating that decision-making in certain instances has been delegated to algorithms.
One of the most tragic examples of this trend is the attack that resulted in the martyrdom of 168 students in Minab. This catastrophe demonstrates how reliance on automated systems, without adequate human oversight, can lead to fatal errors.
Under such circumstances, the fundamental question arises: who bears responsibility for these errors? Can an algorithm be held accountable, or should this responsibility be attributed to its designers and users? This question represents one of the foundational challenges of international law in the age of artificial intelligence.
The Erosion of Ethics in Warfare: From Human to Machine Decision-Making
One of the profound consequences of militarizing artificial intelligence is the gradual erosion of ethics in warfare. In traditional conflicts, decisions regarding the use of force—however pressured—were ultimately made by humans. This ensured at least a minimum degree of moral responsibility.
With the advent of artificial intelligence, this relationship has been disrupted. Algorithms operate based on predefined data and patterns and lack human understanding of concepts such as suffering, dignity, and proportionality. This characteristic increases the risk of transforming warfare into a purely technical process devoid of human considerations.
The development of new technologies without regard for human rights can lead to widespread violations of these rights. In the military domain, this risk is significantly greater, as decisions made are directly linked to human lives. In such an environment, the concept of responsibility also becomes ambiguous. When an attack is conducted by an automated system, determining who should be held accountable becomes a complex challenge. This situation can lead to a form of structural irresponsibility in which no actor is fully answerable.
Conclusion
Warfare has become, more than ever before, an arena in which technology and military power are completely intertwined. Artificial intelligence, as one of the most significant of these technologies, has not only altered the methods of warfare but has also challenged fundamental concepts such as responsibility, ethics, and legitimacy.
Recent experiences, including the Minab School atrocity, demonstrate that unregulated use of these technologies can have catastrophic human consequences. Under such circumstances, the necessity of formulating new legal and ethical rules is more pressing than ever. However, the reality of the international system suggests that competition for technological superiority may override these considerations. Consequently, the future of warfare will be shaped less by ethical principles and more by algorithms and the logic of power.