The international conversation about lethal autonomous weapons has matured from alarm to technical debate and then to competing policy initiatives. What began as an urgent moral appeal from civil society and some states to prohibit the automation of killing has, in the last five years, become a tortuous process of norm building inside the Convention on Certain Conventional Weapons and a proliferation of national and plurilateral frameworks that stop short of binding law. The result is neither clean progress nor outright failure. It is a contested middle ground that reveals where law meets politics and where philosophy meets engineering.

The institutional locus for a treaty has been the CCW, through its Group of Governmental Experts. The GGE has produced sustained analytical work on the legal, technical, and operational questions raised by autonomy in targeting, but its mandate and the CCW's rules of procedure require consensus. That procedural requirement, combined with divergent security interests, has repeatedly constrained the group to producing expressions of concern and lists of possible approaches rather than a negotiating text for a binding instrument. At times this has led to procedural stalls, for example when the format of meetings themselves became a point of contention.

State practice reveals two broad camps. One camp, including a mixture of small and medium powers, many Latin American states, and a vigorous civil society front led by the Stop Killer Robots campaign, presses for a new legally binding instrument that would bar certain classes of autonomous systems or at least require the retention of meaningful human control over the use of lethal force. This camp has advanced concrete proposals, including roadmaps and protocol-style approaches, arguing that incremental good practice will not remove the fundamental moral and legal risks.

The other camp, which includes major military powers, has favored non-binding standards, principles, or codes of conduct. In February 2023 the United States unveiled a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a voluntary, nonbinding framework meant to shape behavior without foreclosing future military uses of autonomy. Proponents argue that such pragmatic instruments can be adopted more rapidly and can accommodate legitimate defense needs while promoting safe practice. Critics respond that voluntary norms lack enforceability and will not prevent states from pursuing asymmetric advantages in battlefield autonomy.

Technical and conceptual difficulties compound the political ones. The idea of meaningful human control has become the hinge on which much of the debate turns, but the phrase is malleable. Humanitarian law actors, including the ICRC and several legal scholars, have emphasized that a requirement for meaningful human control implies operational constraints on design, predictable performance, and human supervisory structures across the lifecycle of a weapon system. Engineers point out that autonomy exists on a spectrum and that many systems already employ machine assistance in target detection and decision support. Producing an operational definition that is both legally robust and technically realizable remains an open challenge.

The diplomatic record shows modest signs of normative convergence even as it confirms the lack of a binding solution. In October 2022 an unprecedented 70 states delivered a joint statement at the UN General Assembly urging limits and highlighting the need for human judgment and accountability. At the same time, several states tabled proposals inside the GGE for next-step measures, including a protocol approach and lifecycle regulations. Those moves signal growing international discomfort with unconstrained autonomy. Yet the CCW operates by consensus, and influential states have resisted preemptive bans, preferring approaches that preserve capability development conditioned on compliance with international humanitarian law. The interplay of these forces has produced, so far, an accretion of norms rather than a treaty.

What does this mean practically? First, progress is real but fragile. We now have a clearer map of the ethical and legal fault lines. Civil society has succeeded in pushing the issue onto high diplomatic agendas. States are developing domestic policies, military doctrines, and cross‑border initiatives that embed principles such as human oversight, testing, and accountability. Those steps matter because they change expectations and create a patchwork of practices that can become de facto standards.

Second, stalemate is also real. The absence of binding obligations means that divergence in doctrine and investment strategies is likely to continue. A voluntary political declaration will not bind future governments that judge battlefield advantage to outweigh reputational cost. The consensus rule in the CCW means that any treaty worthy of the name will require diplomatic tradeoffs that some states are currently unwilling to make. The strategic incentives favor delay as long as the technology still promises a competitive edge.

Third, the locus of leverage is shifting from a single multilateral instrument to a multiplicity of sites: national law, alliance norms, export controls, operational manuals, and certification regimes. This distributed governance could produce meaningful constraints if states and industry converge on rigorous engineering standards for predictability, auditable decision chains, and clear accountability mechanisms. It will not be enough to assert that humans remain “in the loop” if human judgment is eroded by automation bias or if the timescale of engagement precludes intervention.

Normatively, we must ask what kind of international order we want when life and death choices are mediated by machines. The temptation to delegate moral decisions to engineers is strong because it looks like the ethically neutral path; in practice, algorithmic objectivity masks value judgments embedded in design and deployment choices. If international law is to retain its normative force, it must either become more precise about the human role in lethal decisions or create enforceable ceilings on certain categories of autonomous functions. Otherwise law will become a veneer over an accelerating technological arms race.

In short, the state of treaty-making on lethal autonomous weapons is best described as constrained progress. There are important normative gains and a clearer vocabulary for regulation. There is, however, no binding international treaty as of April 2023 that comprehensively governs the development, transfer, and use of fully autonomous lethal systems. Whether that gap is temporary or durable depends on politics, technological trajectories, and public will. If we value human moral agency and legal accountability, then the international community must move beyond slogans and toward enforceable mechanisms that align technical feasibility with legal responsibility. Otherwise the moral residue of delegation will fall not on engineers but on all of us.