By Amit Kapoor and Mohammad Saad

The ongoing Iran–USA–Israel conflict has revealed the evolving capabilities and risks of modern warfare. What began as an attempt to reshape the regional political landscape has become a sustained conflict with ripple effects across energy markets. Among the unexpected developments emerging from this war is the growing focus on AI’s role in modern warfare. From rapid target identification and strike execution to air combat vehicles and propaganda campaigns, AI is opening new operational fronts. AI-driven military operations are not new, yet with the world still lacking clear guardrails to govern such systems, the situation invites comparisons to the early nuclear age. It took decades and several near-catastrophes before meaningful safeguards emerged. AI presents a similar tipping point. With the technology being deployed in real time, a global response is becoming increasingly urgent, as this moment carries profound implications for global peace and for the AI ecosystem itself.

While primitive forms of AI have been part of military operations for decades, the Russian invasion of Ukraine offers a visible illustration of modern AI use at scale. To counter Russia’s numerical superiority, Ukraine resorted to cheap drones, which enabled attrition warfare. Russia countered with similar drones, and the battle entered a new phase when Ukraine deployed AI-powered targeting systems, which allow drones to identify and strike targets with minimal human intervention. Ukraine then expanded the use of AI-powered systems on the ground, with reports indicating the country has tested over 70 domestically developed unmanned ground vehicles.

The Russia–Ukraine war demonstrates that much of AI’s battlefield potency lies in compressing the ‘kill chain’, the sequence from identifying a target to authorising a strike. This process, which once took days or weeks of analysing satellite imagery, drone feeds and battlefield intelligence, has now collapsed into hours with AI’s help. It is this compression that apparently shaped the opening days of the Iran conflict. With the US CENTCOM commander noting that AI was used almost every day, questions remain over whether AI enabled such a high tempo of strikes.

Importantly, AI has opened a new front in warfare: countries can now use AI-generated videos and images to control the war narrative. This allows governments to justify the war, thereby maintaining political legitimacy and limiting internal dissent. As wars are no longer fought in isolation, AI-driven narrative control can also help countries secure weapons, intelligence, and diplomatic support from allied nations. In the recent conflict, groups aligned with the Iranian government have released numerous “Lego” videos that shape the narrative in favour of the regime’s interests. Narrative control, which once depended on the coordinated support of national media, local newspapers, and diplomatic engagement across international platforms, has become far more accessible with the help of AI.

Amid these developments, concerns have grown about the ethics of AI-powered weapons and the extent of military reliance on them. Although many systems still operate with significant human oversight and technical limitations, AI’s lack of intrinsic moral reasoning has led to pushback against autonomous weapons. Consequently, armed forces have often justified the use of AI by ensuring that humans always stay ‘within the loop’: AI helps identify targets, but the final execution stays in human hands. Yet even while the moral compass rests with human conscience, the level of intelligence AI can confer on military personnel, and the destruction it can enable, still raise serious concerns.

Additionally, AI-powered weapon systems are only as reliable as their data. If unreliable or faulty data is used, the risks scale enormously: drones may end up hitting noncombatants or civilian infrastructure if their data is erroneous or has been deliberately manipulated via cyberattacks. A key concern is the slow and uneven progress toward a global consensus on limiting AI-powered weapons.

While International Humanitarian Law (IHL) and the Law of Armed Conflict (LOAC) restrict military action against noncombatants, damage to civilian infrastructure and crimes against humanity, there is no legally binding international framework governing the use of Lethal Autonomous Weapon Systems (LAWS). Although IHL and LOAC are largely acknowledged and complied with, countries such as the USA and Russia have often opposed legally binding instruments against AI-driven weapons. Thus, the technology has been in active use despite growing calls against LAWS. UN Secretary-General António Guterres has urged agreement on a legally binding instrument within the year, but experts feel the deadline is unlikely to be met.

With no treaty yet in sight, insecurity over advanced attacks can incentivise more countries to equip their forces with the technology. Reports indicate that the Chinese military is actively looking to acquire AI systems that can counteract US warfighting advantages. Such developments reflect a cycle of arms escalation, in which countries continually pursue more advanced weaponry in response to the expanding capabilities of others. Last year, Austrian Foreign Minister Alexander Schallenberg warned that AI-driven warfare could lead to an uncontrollable arms race.

If an AI arms race were to materialize, the global AI ecosystem could fragment rapidly, with nations exercising some degree of AI autarky. This is because AI-driven weapons are most effective when a country controls its technology stack, and gaps in this control can work against national interests. Nations exercising limited AI sovereignty, such as India, may find themselves at a disadvantage compared with the USA and China, which exercise greater sovereignty over key technologies. Given that full technological sovereignty is not possible for all countries, they may be incentivised to align with AI superpowers, leading to the formation of AI blocs, likely centred around the USA and China.

These trends underscore the need for faster progress toward legally binding instruments that govern the use of AI. While some argue that human supervision may suffice, it offers no guarantees in the absence of binding constraints that can hold states accountable for unjustified or disproportionate uses of AI in warfare. The legacy of the nuclear arms race is instructive, and responsible nations must lead efforts to limit technologies that risk enhancing offensive advantage to the point where moral judgment is sidelined.

(Amit Kapoor is Chair and Mohammad Saad is a Researcher at the Institute for Competitiveness. X: @kautiliya)

The article was published in The Economic Times on April 24, 2026.
