The armed conflict against Iran launched on February 28th by Washington and Tel Aviv was quickly labeled the “first AI war.” This claim is misleading in several respects. Not only has AI been used extensively in recent conflicts, especially by Israel in Gaza, but more broadly, AI, as a digital means of processing and analyzing data, has a long history in armed conflict, dating back to World War II.
The Iranian situation stands out for the unprecedented sophistication of these tools and the extent to which the armed forces relied on them. It also differs from the conflict in Gaza in that, this time, AI was deployed against a state adversary in a high-intensity war. Finally, never before had states communicated so openly about their use of these systems. This openness, combined with the dramatic consequences of some strikes, raises questions about the compatibility of these practices with international law.
The facts: the use of AI in the war in Iran
The use of AI by Israel in its war against Hamas was revealed by the magazine +972. In the conflict in Iran, however, it was the American authorities themselves who announced their use of AI.
The US military acknowledged using AI systems to build and sort through a list of targets at lightning speed, a process that led to more than 1,000 highly precise strikes within the first twenty-four hours of the conflict. It specifically relied on the Maven Smart System, a Palantir AI platform for surveillance and data collection, combined with Claude, the generative AI system developed by Anthropic.
However, on the first day of the war, one of the American strikes hit a school in Minab, killing around 170 civilians, mainly children. The US acknowledged its responsibility for this strike, which it presented as an error. The school was located near a naval base of the Revolutionary Guards and had previously been part of that complex before being separated from it. Outdated information may have led to the strike being authorized.
The legality of AI use
Regarding the legality of using AI for these strikes, and of the error committed, it must be made clear that AI as such is not prohibited by the law of armed conflict (LOAC, also known as international humanitarian law). There is currently no specific legal rule addressing the legality of AI. This does not mean the issue sits in a legal void, however: the general rules of LOAC apply to the conduct of hostilities regardless of the means and methods deployed.
One of these rules is the principle of distinction, under which only military objectives may be attacked, while civilians and civilian property must be protected. Deliberately targeting a school like the one in Minab, with no military objective inside, would be a clear violation of this principle. It is unlikely, however, that the American military intended to destroy the school itself. It was more probably a target-identification error, possibly linked to an AI system operating on outdated data from the period when the building was still part of the naval base.
The violation therefore relates more to the principle of precaution. This principle requires the parties to a conflict to take all feasible measures to verify that the targets they attack are indeed military objectives. In this case, the US military does not appear to have carried out the verifications needed to establish that the target was a school. Additional checks, like those carried out by some media outlets, could quickly have dispelled any doubt.