AI Weapons in Conventional Warfare 2: Strategic Stability & Solutions


Beyond ethical and legal concerns, autonomous AI weapons pose strategic challenges that may reshape global power dynamics. From destabilizing deterrence frameworks to empowering non-state actors, the geopolitical consequences of widespread AI weaponization demand careful examination. This article explores these risks and evaluates potential solutions.

AI Weapons and the Disruption of Global Power Balance

As I wrote in my first article, ‘Between Two Giants’, an imbalance of power plays a major role in starting wars.

History demonstrates that wars are often enabled not by absolute superiority, but by perceived strategic advantage.

Take World War II as an example: in conventional terms, Germany, Japan and Italy held a strategic edge over their primary enemies, the UK, France and other states. The USSR stayed out of the war under the Molotov-Ribbentrop Pact, and so, initially, did the US. This power imbalance gave Germany the perfect opportunity to invade Poland and start a full-scale war, as victory seemed imminent, although the later involvement of the US and the Soviets changed the course of the war.

Power imbalance often fuels tyranny, but at present the major powers hold roughly comparable strength. This keeps them from starting full-scale wars, as the outcome of such a conflict would be unpredictable and almost certainly catastrophic.

The introduction of nuclear weapons in the 1940s vastly changed these dynamics, and the Non-Proliferation Treaty only entrenches the power gap between the handful of nuclear-armed states and everyone else. Large-scale development and deployment of AI-powered weapon systems would have a similar effect on an already fragile balance: states whose militaries were weak for lack of training in fields like precision targeting and target identification would no longer face that constraint.

What makes this worse is that a non-proliferation-style treaty would not be effective: many states already possess AI-powered weapons, and AI systems can be incorporated into weapons far too easily, swiftly and secretively for any enforcing body to identify a violation before the harm is done.

It would not be unjustified to believe that if a state like Eritrea or Afghanistan were to develop and deploy AI-powered weapon systems as strategically destabilizing as nuclear weapons are destructive, the global firepower balance would cease to exist.

Such a disruption of the global firepower balance does not remain confined to state actors alone; it inevitably lowers the threshold for access to force, creating opportunities for non-state actors to exploit advanced weapon systems.

Empowerment of Non-State Actors through Autonomous Weapon Systems

If AI weapons, with their unpredictability and sheer power, especially when paired with drones and loitering munitions, fall into the hands of non-state actors (bodies which do not fall under the jurisdiction of any nation or international body, or cannot be legally prosecuted), they become catalysts of disaster.

Terrorists have long used developing technology to enhance their firepower. These weapons can be used to support insurgency efforts or to inflict terror by targeting civilian infrastructure.

The more autonomous the system, the greater the terrorists' immunity from prosecution. Evading accountability is a major part of terrorist strategy; historically this was done by commissioning suicide or kamikaze missions, but autonomous weapons are a far more practical tool, as they evade accountability just as easily while sparing the organization the loss of a member.

The threat posed by non-state actors is further amplified by the digital nature of AI systems, which introduces vulnerabilities that can be exploited not only physically, but also through cyberspace.

Cyber Vulnerabilities in AI-Enabled Warfare

Just like all digital devices, AI is vulnerable to cyber-attacks.

A cyber-attack on several individually owned devices can cause a major disruption worldwide, as seen many times in recent history.

Such an attack on AI-powered autonomous weapons would be far more dangerous, as a single change in the code can make a huge difference on the battlefield.

Terrorism plays a major role here. Terrorist organizations would no longer need to smuggle weapons across borders; all they would have to do is breach the layers of security protecting a system and compromise its code. Such efforts could prove very effective, as the weapon systems might carry firepower the terrorists could never obtain on their own.

Although security measures will be taken, and breaching such systems would be very hard, there is almost always a loophole.

In this case, what matters is not the probability of such an event, but its consequences, however unlikely it may be.

Cybersecurity will define the safety of AI-powered weapons, and of the humans around them.

The scale and consequences of such vulnerabilities indicate that technological risks alone cannot be addressed through military safeguards, but require structured governance responses at both national and international levels.

Potential Solutions

Non-Proliferation Treaty

When the world started developing nuclear weapons, countries signed the non-proliferation treaty to ensure global peace and order.

The treaty stated that no nation, other than those which already possessed nuclear weapons at the time of its signing, shall develop nuclear weapons.

Such a treaty for AI-powered weapons would make sense, if not for the loopholes, which are the same as the ones in the Nuclear Non-Proliferation Treaty.

-The power of liberty:
 States are not obliged to sign the treaty, as it would violate state sovereignty, and, as I have
 stated already, upholding state sovereignty is a duty of the UN, as it is enshrined in the UN
 Charter.

-Power gaps:
 States which have already developed AI-powered weapons would likely extend their lead in the arms
 race, disrupting the global military balance. Such a treaty also risks reinforcing polarity,
 possibly making the Doomsday Clock tick closer to midnight.

Moreover, such a treaty would not be very effective, as many states have already developed AI-powered autonomous weapon systems, and carving out a set of states as exceptions makes the treaty far too unfair to the weaker ones.

However, a purely restrictive approach risks ignoring the dual-use nature of artificial intelligence, where outright prohibition may hinder applications that could otherwise improve precision and reduce collateral damage.

Uses of AI-Powered Autonomous Weapon Systems

Despite the vast number of problems with these systems, they can still be used for enhancing precision indirectly.

Surveillance:
-Despite their flaws, AI systems are already used for surveillance and other support roles under
 human oversight. Such measures in conventional warfare would increase precision in surveillance
 and target identification without posing a direct threat to humans.

Warfare navigation:
-AI systems can provide substantial support for navigation in air and naval warfare.

This highlights the need to move beyond a binary choice between unrestricted autonomy and total prohibition, and instead explore frameworks that allow controlled integration of AI into military systems.

The Sweet Spot

Fully autonomous weapon systems pose a serious threat to humanity. But unlike nuclear weapons, AI-powered AWS are dangerous not because of their destructive threshold, but because of the accountability vacuum, their unpredictability, and the lack of human judgement.

A feasible solution would be neither fully autonomous nor entirely human-guided: a system with the precision and other advantages of AI, but with meaningful human oversight.

Semi-Autonomous weapon systems manage to uphold the balance of precision, ethical judgement, and accountability. This balance is crucial for the use of AI-powered AWS without raising serious concerns about humanitarian risks.

Semi-autonomous systems avoid many of the threats posed by fully autonomous systems such as loitering munitions, which makes them a far safer alternative.

How Semi-Autonomous systems dodge the risks:

Ethical concerns:

A ‘human-in-the-loop’ system addresses the main ethical concerns with AI-powered AWS. A human retains the ability to manage the actions executed by the system, so if an action poses a serious risk, the operator can step in and prevent a catastrophe.

Accountability:

The operator has the power to control the actions of the weapon system. So, if anything goes wrong, the operator can be held responsible, as they had the power to prevent the damage.

Moreover, equipping such systems with a conventional kill-switch would be an efficient and powerful way to manage AWS.
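The human-in-the-loop and kill-switch ideas above can be sketched in a few lines of Python. This is purely a conceptual illustration under my own assumptions; every class, method and threshold here is hypothetical, and no real weapon system API is implied:

```python
import threading


class SemiAutonomousSystem:
    """Conceptual sketch: the AI proposes, a human disposes.

    All names and values are hypothetical, for illustration only.
    """

    def __init__(self):
        # Kill-switch the operator can trip at any time, from any thread.
        self.kill_switch = threading.Event()

    def propose_action(self, target_confidence: float) -> str:
        # AI side: rank a candidate action, but never escalate on low confidence.
        if target_confidence < 0.9:
            return "hold"
        # Even at high confidence, the system only *requests* approval.
        return "request_approval"

    def execute(self, proposal: str, operator_approved: bool) -> str:
        # Human-in-the-loop gate: the kill-switch overrides everything,
        # and nothing engages without explicit operator approval.
        if self.kill_switch.is_set():
            return "aborted"
        if proposal == "request_approval" and operator_approved:
            return "engaged"
        return "held"


system = SemiAutonomousSystem()
proposal = system.propose_action(target_confidence=0.95)
print(system.execute(proposal, operator_approved=True))   # engaged
system.kill_switch.set()                                  # operator trips the kill-switch
print(system.execute(proposal, operator_approved=True))   # aborted
```

The point of the sketch is the ordering of checks: the kill-switch is evaluated before anything else, and approval is a precondition rather than an afterthought, which is what keeps the operator accountable for every engagement.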

Semi-autonomy removes most of the major risks associated with AI-powered AWS while keeping benefits like improved accuracy and efficient target recognition.

Identifying this balance between human judgement and machine efficiency is central to ensuring that future warfare does not outpace ethical responsibility, legal accountability, and strategic stability.

As artificial intelligence continues to reshape the character of warfare, the challenge lies not in resisting technological change, but in governing it responsibly. Without timely regulation, meaningful human oversight, and international cooperation, AI-enabled warfare risks destabilising global security far beyond the battlefield.
