Asymmetric wars are, by definition, contests in which advantage is unevenly distributed. Wealth, platforms, doctrine, intelligence, and legal oversight all sit on a spectrum. Over the past decade, artificial intelligence and autonomy have not merely entered that spectrum. They have reordered it. When cheap sensors, off-the-shelf autonomy, and low-cost strike packages fall into the hands of actors who lack the institutional constraints and legal accountability of states, the ethical calculations that governed twentieth-century armed conflict begin to fray.
Two basic ethical vectors matter in these settings. The first is the set of established humanitarian principles codified in international humanitarian law: distinction, proportionality, and precautions in attack. These principles presuppose human judgement, contextual understanding, and moral agency. Organisations such as Human Rights Watch argued early and forcefully that full delegation of targeting and firing to machines risks systematic violations because machines cannot exercise the complex, subjective judgement those rules require.
The second vector is accountability. If a weapon system acts unlawfully, who answers for the harm? State actors possess chains of command, judicial systems, and some mechanisms of redress. Non-state actors rarely do. Human Rights Watch and Harvard Law’s 2015 analysis warned that fully autonomous weapons would create an accountability gap in which programmers, manufacturers, commanders, and deploying states could all plausibly escape responsibility for the resulting harm. That gap is not an abstract legalism. In practice it can enable reckless use of force wherever remedies are weak or absent.
These ethical risks are not speculative. The diffusion of unmanned systems, loitering munitions, and enhanced autonomy has already altered asymmetric engagements. States and proxies have employed commercial and purpose-built drones to strike infrastructure, harass logistics, and create political leverage. The 2019 attacks on Saudi energy infrastructure and the proliferation of low-cost loitering munitions in several theatres show how inexpensive systems can produce strategic effect while complicating attribution and legal responsibility. In the 2020 Nagorno-Karabakh campaign and other recent conflicts, remote systems changed the operational calculus for actors who lacked symmetric conventional capabilities. The practical lesson is simple and uncomfortable: autonomy and AI amplify asymmetry by lowering entry costs for offensive effects.
That amplification creates a set of moral pathologies unique to asymmetric conflicts. First, lowering the political cost of violence makes kinetic options more attractive to weaker actors, increasing the risk of escalation and harm to civilians. Second, ambiguity about provenance and control enables plausible deniability, which corrodes norms of restraint. Third, algorithmic decision-making trained and operated on chaotic front-line data can reproduce societal biases, exposing marginalised populations to disproportionate harm. Humanitarian agencies and disability-rights scholars alike have warned that automated systems tend to embed and magnify discrimination when their training data and operational profiles are not scrutinised.
Technical fixes will not, by themselves, resolve these moral problems. Calls for “meaningful human control” attempt to graft ethical constraints onto system design and doctrine. The notion is useful as a normative anchor, but it is slippery in practice. What constitutes meaningful control in a fast, large-scale swarm environment? How much time and situational awareness must an operator have to be morally responsible for a machine’s lethal act? Scholarly work urging a nuanced, context-sensitive model of meaningful human control is right to press for practical criteria, but those criteria will not be achievable in many asymmetric engagements.
International governance has reacted, unevenly. The Convention on Certain Conventional Weapons has hosted repeated intergovernmental discussions and a Group of Governmental Experts charged with addressing these questions. By 2023 states had reconfirmed that international humanitarian law applies to new systems while disagreeing on whether new legally binding instruments are necessary. At the same time, initiatives such as the US-led political declaration on responsible military AI use sought to establish non-binding norms around human oversight and auditability. Norms are necessary but insufficient. Consensus on enforcement and on preventing proliferation to irregular forces remains elusive.
Policy responses must proceed on three parallel tracks. First, preserve and strengthen human judgement where moral stakes are highest. That means insisting on human-in-the-loop authority for targeting decisions that involve lethal outcomes, and designing systems so that override, interrogation, and audit are technical primitives rather than afterthoughts. Second, harden export controls, supply-chain transparency, and technical standards to reduce the capacity of non-state actors to field lethal autonomy at scale. History shows that modest, inexpensive adaptations can convert commercial platforms into weapons of consequence. Third, close the accountability gap by clarifying lines of criminal, civil, and state responsibility for autonomous systems’ actions. Precommitment to investigatory mechanisms and victim compensation schemes will make law meaningful rather than ceremonial.
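To make the point about technical primitives concrete, the sketch below illustrates one way override, interrogation, and audit can be designed in from the start rather than bolted on. It is a minimal, hypothetical example, not a description of any fielded system; names such as EngagementRequest, HumanAuthority, and AuditLog are illustrative assumptions. The idea is architectural: no engagement can proceed without a named human decision, the machine’s rationale and confidence are surfaced for the operator to interrogate, and every approval, rejection, or abort is written to an append-only log.

```python
# Minimal sketch, under assumed names, of "human-in-the-loop by construction":
# an engagement request cannot proceed without an explicit, recorded human
# authorisation, and every decision is written to an append-only audit log.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List


class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    ABORTED = "aborted"


@dataclass(frozen=True)
class EngagementRequest:
    target_id: str
    rationale: str      # machine-generated summary shown to the operator
    confidence: float   # model confidence, surfaced for interrogation


@dataclass(frozen=True)
class AuditRecord:
    timestamp: str
    operator_id: str
    request: EngagementRequest
    decision: Decision
    note: str


@dataclass
class AuditLog:
    records: List[AuditRecord] = field(default_factory=list)

    def append(self, record: AuditRecord) -> None:
        # Append-only: records are never mutated or deleted.
        self.records.append(record)


class HumanAuthority:
    """Gate requiring a named human decision before any lethal action."""

    def __init__(self, operator_id: str, log: AuditLog):
        self.operator_id = operator_id
        self.log = log

    def _record(self, request: EngagementRequest, decision: Decision, note: str) -> Decision:
        self.log.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            operator_id=self.operator_id,
            request=request,
            decision=decision,
            note=note,
        ))
        return decision

    def decide(self, request: EngagementRequest, approve: bool, note: str = "") -> Decision:
        decision = Decision.APPROVED if approve else Decision.REJECTED
        return self._record(request, decision, note)

    def abort(self, request: EngagementRequest, note: str = "operator override") -> Decision:
        # Override is a first-class operation, always available and always logged.
        return self._record(request, Decision.ABORTED, note)


if __name__ == "__main__":
    log = AuditLog()
    authority = HumanAuthority(operator_id="op-017", log=log)
    request = EngagementRequest(target_id="track-42",
                                rationale="classified as hostile vehicle",
                                confidence=0.71)
    # No strike can be issued unless this call returns APPROVED.
    outcome = authority.decide(request, approve=False,
                               note="confidence too low; visual confirmation required")
    print(outcome, len(log.records))
```

The value of such a design is that accountability artefacts exist because the control flow cannot bypass them, not because an investigator later hopes to reconstruct them.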
Finally, there is a moral argument that should not be obscured by legalism or technical optimism. Delegating life-and-death decisions to machines is not merely a matter of compliance with rules. It is a statement about the kind of agency we want to preserve in war. If human dignity is to remain a touchstone of humanitarian ethics, then the normative community must ask whether certain delegations are, in principle, incompatible with that dignity. Many civil society organisations and ethicists answer in the affirmative. The practical question for those who design, fund, and deploy military AI is whether marginal gains in force efficiency are worth the permanent erosion of human moral responsibility in war.
Asymmetric conflicts will not wait for perfect treaties. The ethical burden lies with states, technologists, and institutions to act now. We must temper the seductive logic of automation with legal clarity, export discipline, and institutional humility. If we fail, the next generation of conflicts will teach the lesson the hard way: machines can change battles, but only humans can be held to moral account for how those battles are fought.