We stand at an ethical bifurcation. One path leads toward a negotiated, legally binding constraint on autonomous systems used to apply lethal force. The other leads toward competitive militarization in which states, firms, and proxies race to embed greater autonomy into weapons because that is where the tactical advantage lies. Neither outcome is purely technological. Both are political, legal, and moral.

In 2024 the international architecture for addressing lethal autonomous weapon systems is active but fragmented. The United Nations and civil society renewed pressure for a treaty after an advance report by the Secretary-General in early August urged stronger, binding steps and warned that machines that autonomously target humans cross a moral line. At the same time the Convention on Certain Conventional Weapons continues to host a Group of Governmental Experts, which met in March and again in August to try to convert technical discussions into common language and, eventually, into elements of an instrument. These forums show that momentum exists, yet they also make clear that translating anxiety into agreement is not straightforward.

Regional and domestic regulation is also reshaping the terrain. The European Union brought a comprehensive AI Act into force on 1 August 2024, creating a risk-based regulatory scaffolding for AI more broadly. The Act signals that states can, and will, regulate AI at scale. It also demonstrates the political appetite for legal constraints on dangerous applications of algorithmic power even when those applications are dual-use. That reality matters because weapons autonomy sits in a contested zone between civilian and military AI capabilities.

But treaty politics are difficult. A core strategic obstacle is that leading militaries perceive autonomous functions as potential force multipliers that reduce casualties among their own personnel and complicate an opponent’s calculus. The United States has explicitly resisted opening negotiations on a legally binding prohibition, preferring instead to shape norms and controls that preserve operational flexibility while insisting on human judgement where required. Other major powers stress different formulations, often distinguishing between unacceptable configurations and those they deem permissible. Those differences are not merely semantic. They go to whether an agreement would be prescriptive or permissive, narrow or broad, verifiable or symbolic.

Humanitarian and legal advocates make the counterargument that existing law will be strained if autonomous weapons proliferate. The International Committee of the Red Cross and broad civil society networks have argued that delegating life-and-death decisions to algorithms risks grave violations of human dignity and international humanitarian law unless strict limits are imposed. Their case is not simply rhetorical. It rests on real concerns about bias in training data, the brittleness of perception systems in contested electromagnetic environments, and the difficulty of attributing accountability when machine behaviour is emergent rather than preprogrammed.

Operational experience is already reshaping incentives. Conflicts and crises accelerate adoption. Where battlefield utility appears, imitation follows and suppliers proliferate. Observers in 2024 documented how AI and autonomy are moving from research labs and exercises into the operational domain, which compresses the time available for multilateral negotiation. This dynamic creates a perverse incentive structure. If one actor accepts constraints and others do not, the constrained actor may be at a strategic disadvantage. That is the classic logic of arms races. The result is that moral urgency and strategic prudence can point in opposite directions at the same time.
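To make that arms-race logic concrete, a minimal stylized payoff matrix, using illustrative numbers that are assumptions rather than estimates, captures the bind: each state gains most by deploying while the other restrains, so unilateral restraint is unstable even though mutual restraint serves both sides better.

% A stylized two-state deployment game; the payoff values are illustrative assumptions, not empirical estimates.
\[
\begin{array}{c|cc}
 & \text{B restrains} & \text{B deploys} \\ \hline
\text{A restrains} & (3,\,3) & (0,\,4) \\
\text{A deploys} & (4,\,0) & (1,\,1)
\end{array}
\]
% Deploying strictly dominates restraining for each state, so the equilibrium is mutual deployment at (1, 1),
% even though mutual restraint at (3, 3) leaves both better off.

The numbers themselves carry no weight; what matters is the ordering of the payoffs, and it is precisely that ordering which binding rules, verification, and the measures sketched below are meant to alter.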

So what are the realistic choices? A blanket, universal ban negotiated and ratified by all major powers would be ideal from an ethical standpoint. It would erect a clear norm against machines autonomously targeting humans and set up verification and redress mechanisms grounded in international law. The political reality in 2024 is less accommodating. Deep disagreements about definitions, scope, and compliance make consensus difficult in forums that require unanimity. Moreover, the dual-use nature of enabling technologies complicates export controls and monitoring.

If a fully universal treaty seems unlikely in the short run, practical realism suggests a three-track approach that both reduces risk and preserves the possibility of later, broader agreement. First, codify non-negotiable red lines now. The Secretary-General and humanitarian organisations have identified the autonomous targeting of humans as one such red line. States should accept and publicly endorse that principle and embed it in domestic law and procurement rules. That would create a baseline of rejection for the most ethically fraught application.

Second, broaden transparency and confidence building. States should require publicly reported safety cases, legal reviews of systems used with lethal effect, and export controls targeted at autonomy-enabling modules. The EU AI Act shows that regional governance can impose meaningful obligations without eliminating innovation. Similar regulatory architectures, tailored to defence and security applications, could raise the political cost of reckless deployment. Those measures are imperfect. They will not stop every bad actor. Yet they can slow diffusion, sharpen accountability, and provide negotiators with technical and legal common ground.

Third, create pathways toward a legally binding instrument that do not demand immediate unanimity. The UN General Assembly, which operates by majority vote rather than consensus, offers a venue for normative progress when consensus-based forums stall. In parallel with multilateral diplomacy, technical norm development should be driven by mixed coalitions of states, industry, and independent experts to craft definitions, thresholds, and verification methods that make a future treaty practicable. Civil society convening and expert processes that draw on battlefield data and defensive experience will improve the credibility of any negotiated instrument.

Finally, remember the human judgement that cannot be coded away. Technology will continue to outrun wisdom unless institutions of accountability and legal restraint are built at the same pace. A treaty without teeth will be gesture politics. An unchecked race will make the discourse about accountability irrelevant when the first catastrophic misuse occurs. The prudent path in 2024 is therefore dual. Push urgently for binding rules that prohibit the most morally objectionable applications. At the same time, construct incremental, enforceable national and regional measures that change the incentive structure on the ground. Without both, we risk replacing human deliberation with algorithmic expedience and, in the process, losing leverage over the very technology we created to protect us.