AI Weapons in Conventional Warfare: Accountability and Ethics

Modern warfare is undergoing a technological transformation. Artificial intelligence, once limited to predictive and analytical tasks, is now being integrated into weapons systems capable of autonomous action. As states experiment with AI-powered autonomous weapon systems (AWS), serious concerns arise regarding accountability, ethics, and the absence of human judgment. This article examines the legal and moral dilemmas associated with delegating lethal authority to machines.

Accountability Vacuum in Autonomous Warfare

The foundations of international law were laid in the 18th, 19th, and 20th centuries, when concepts like artificial intelligence did not exist.

At the time, AI turning into killer robots was the wildest of hypotheses, entertained by a handful of people at most, or possibly no one. International law, as established under the UN, could not be built on a mere hypothesis, so the possibility of AI becoming an autonomous weapon was never considered when those laws were drafted.

Due to this lack of defined laws regulating the use of AI in warfare, and because there is no effective way to prosecute an AI system, an accountability vacuum forms.

In simple words, the question is: “If an AI-powered weapon commits a war crime, who should be prosecuted? The coder, the commander, the soldier, or the AI itself?” The last option may seem the most sensible, but, as mentioned earlier, it is not practical until humans develop a method to effectively prosecute an AI.

Militaries have already started developing AI-powered autonomous weapon systems, and some have already been deployed on battlefields.

Yet there is no legal framework to regulate the development, deployment, and use of these systems.

This legal gap makes it easy to evade accountability for the actions of AI weapons.

If an AI-powered weapon were to commit a major war crime, prosecution could therefore remain unresolved for a long time, possibly forever.

Even if effective attribution frameworks were to emerge, accountability would remain incomplete unless the internal decision-making processes of AI systems themselves are examined for fairness and reliability.

Algorithmic Bias and Target Misidentification

A major problem faced by AI systems is one humanity itself has grappled with for centuries: bias.

Cases of algorithmic bias in AI have begun emerging on a large scale.

Algorithmic bias occurs when a system produces systematic, unfair errors that privilege certain groups over others based on characteristics such as race, gender, or nationality.
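
As a toy illustration of what such a systematic error gap looks like in practice, the following sketch (with entirely hypothetical data and group labels) audits a classifier’s false-positive rate per group. In a targeting context, a false positive is a misidentified target:

```python
# A minimal, hypothetical sketch of auditing a classifier for group bias.
# Neither the data nor the classifier refers to any real system.

from collections import defaultdict

# (group, true_label, predicted_label) — 1 means "flagged as a threat"
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]

false_positives = defaultdict(int)  # innocents wrongly flagged, per group
negatives = defaultdict(int)        # total innocents, per group

for group, truth, prediction in records:
    if truth == 0:                  # person is actually not a threat
        negatives[group] += 1
        if prediction == 1:         # ...but the system flagged them anyway
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate = {fpr:.0%}")
# Prints 33% for group_a and 67% for group_b: a systematic error gap.
```

An equal overall accuracy can hide exactly this kind of disparity, which is why per-group error rates, not aggregate ones, are the relevant measure.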

AI systems have reportedly favoured outwardly masculine faces over outwardly feminine ones.

Such biases have surfaced in recent warfare, where targeting criteria loosely defined militants as ‘military-age men’.

This doesn’t just weaken AI’s ethical standing; it also raises the risk that innocent people lose their lives simply because a system classified them as members of a group they do not belong to.

Moreover, an AI system developed in the US, drawing mostly on US-based sources, would typically portray the war in Iraq as a net positive unless explicitly asked not to, while an AI developed in the Middle East would follow a counter-narrative.

AI doesn’t ‘make’ propaganda. It amplifies existing narratives, and in doing so it may end up encouraging violence against one party. This is nothing short of a recipe for disaster if such a system is given the autonomy to select and attack targets on its own.

While algorithmic bias exposes the structural flaws embedded within AI systems, it also raises a broader concern: whether delegating life-and-death decisions to machines can ever substitute for human judgement, discretion, and moral responsibility in warfare.

Lack of Human Judgement and Ethical Decision Making

One of the reasons civilization has not perished in war is that human judgement prevented us from taking extreme measures that would have caused mass destruction.

Humans see people in Gaza as humans. AI sees them as numbers.

It would not be wrong to state that, unless such systems are tightly regulated, AI will be more ruthless on the battlefield than humans have ever been.

This is a major problem: as warfare evolves, its consequences evolve too. To keep pace with the constantly changing dynamics of warfare, AI systems will have to be recoded often, widening the scope for errors. Predicting these problems will not be hard; fixing them will be.

The widespread use of increasingly destructive weapons would further raise the scale of destruction that even the slightest flaw in code could cause.

With the absence of human judgement and ethics in AI systems, giving them the autonomy to select and attack targets would turn them into killing machines that would almost certainly cause large numbers of civilian casualties.

Moreover, the chances of AI violating international humanitarian law, such as the protections enshrined in the Geneva Conventions, remain dangerously high. AI cannot easily distinguish a wounded soldier from a hostile one on its own. Without a reasonable degree of human oversight, arming AI with the power to choose its own targets would be a grave mistake.
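
What ‘reasonable human oversight’ could mean in software terms can be sketched concretely. The following is a minimal, purely hypothetical illustration of a human-in-the-loop gate: the system may nominate targets, but force is never authorized without an explicit human decision. Every name, threshold, and structure here is an assumption for illustration, not a description of any real system.

```python
# A purely illustrative sketch of a human-in-the-loop engagement gate.
# All names, thresholds, and structures here are hypothetical.

from dataclasses import dataclass

@dataclass
class Nomination:
    target_id: str
    confidence: float   # the system's own confidence in the identification
    rationale: str      # why the system flagged this target

CONFIDENCE_FLOOR = 0.95  # below this, the nomination is discarded outright

def request_engagement(nom: Nomination, human_approve) -> bool:
    """Engage only if the system is confident AND a human explicitly approves.

    `human_approve` is a callback standing in for the human operator; the
    machine may nominate, but the decision to use force stays human.
    """
    if nom.confidence < CONFIDENCE_FLOOR:
        print(f"[{nom.target_id}] rejected automatically: low confidence")
        return False
    approved = human_approve(nom)  # blocking, auditable human decision
    print(f"[{nom.target_id}] human decision: {'approved' if approved else 'denied'}")
    return approved

# Example: an operator who denies anything without a clear rationale.
def cautious_operator(nom: Nomination) -> bool:
    return "verified hostile act" in nom.rationale

request_engagement(Nomination("T-01", 0.97, "verified hostile act observed"), cautious_operator)
request_engagement(Nomination("T-02", 0.98, "pattern-of-life match only"), cautious_operator)
```

The design point is structural: there is no code path to engagement that bypasses the human callback, so oversight is a property of the architecture rather than a policy the system might ‘overlook’.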

The problem also lies in the fact that AI does not know where to stop. A human would think twice before bombing a civilian structure, but a fault in the code, or even a misinterpreted order, could make an AI cause unprecedented harm.

This convergence of legal ambiguity and ethical distortion underscores the urgency of addressing autonomous weapons not as isolated technologies, but as systemic challenges requiring regulatory intervention.

The integration of artificial intelligence into weapons systems raises unprecedented moral and legal dilemmas. Without clear accountability mechanisms, bias mitigation, and meaningful human oversight, fully autonomous AI weapons risk undermining the very foundations of humanitarian law. Before technological capability advances further, ethical and regulatory safeguards must catch up.
