The United Nations General Assembly’s adoption of resolution A/RES/79/62 in December 2024 crystallized a geopolitical reality that many of us have long warned about: states now expect to address lethal autonomous weapons systems at the level of the full Assembly rather than leaving debate solely to closed technical fora. The resolution passed with overwhelming support, reflecting broad anxiety about removing human judgment from use-of-force decisions and a growing demand for concrete international action.

Critically, the resolution mandates the convening of open informal consultations in 2025, with the objective of complementing work already being pursued under other disarmament instruments and of widening participation to all Member States, observers and relevant civil society actors. That choice of venue, the General Assembly rather than an exclusive treaty negotiating table, is both an opportunity and a test. The General Assembly can be an engine for political momentum and normative clarity. Yet without careful design, it risks producing rhetoric that substitutes for binding rules.

Civil society and human rights organizations have responded to this opening with urgency. Groups that have campaigned for a ban or strict regulation of so-called killer robots view the GA space as a necessary expansion of the debate, one that can surface humanitarian and human rights ramifications often sidelined in purely technical conversations. Their political pressure helped to produce the decisive vote in December, and their evidence and moral framing will matter if the consultations are to yield substance rather than symbolism.

On the technical plane there is no easy reassurance. Recent technical analyses have made plain that advanced autonomy brings systemic risks: opacity in decision pathways, susceptibility to adversarial manipulation, brittleness under distributional shift between test and operational environments, and emergent behaviors that evade simple verification. These are not hypothetical edge cases. They are structural features of the machine-learning paradigms that underpin many candidate systems, and they complicate any regime that assumes neat, auditable lines of causation between sensor input and lethal effect. Treaties that ignore these realities will be brittle in practice.
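To make one of these failure modes concrete, consider a deliberately toy sketch of distributional shift, built entirely on assumptions invented for illustration: synthetic two-dimensional Gaussian data, a hand-rolled logistic classifier, and a single shift parameter standing in for the gap between test-range and operational conditions. Nothing about it resembles a real targeting system; it only shows how quietly accuracy can collapse when the deployment environment drifts from the one a model was evaluated in.

```python
# Toy illustration of distributional shift: a classifier trained and evaluated
# under one data distribution degrades when the deployment distribution moves.
# All data and parameters here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves both class centres at deployment time."""
    x0 = rng.normal(loc=[-1.0 + shift, 0.0], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[+1.0 + shift, 0.0], scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression; returns weights and bias."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) / len(y))
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    """Fraction of points on the correct side of the learned decision boundary."""
    return np.mean(((X @ w + b) > 0).astype(float) == y)

X_train, y_train = make_data(2000)                # "test range" conditions
w, b = train_logreg(X_train, y_train)

X_deploy, y_deploy = make_data(2000, shift=1.5)   # shifted "operational" conditions
print("accuracy in test-like conditions:    ", accuracy(w, b, *make_data(2000)))
print("accuracy under distributional shift: ", accuracy(w, b, X_deploy, y_deploy))
```

Real systems fail in far messier ways, but the mechanism is the same: performance measured under one distribution is no guarantee under another, which is why a verification regime cannot rest on pre-deployment test results alone.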

Legal scholars and policy analysts have therefore begun to map realistic architectures for international instruments. One widely discussed idea is a two-tier approach: prohibitions on a narrow class of systems that cannot, by their nature, comply with international humanitarian law, combined with robust regulatory measures for other systems that retain meaningful human control. Such architectures attempt to reconcile normative urgency with technical nuance, but they also raise hard questions about definitions, thresholds, verification and enforcement. Who judges whether a particular algorithmic function places a system beyond the pale? How do we verify compliance when core technologies are dual-use and widely disseminated? These are not merely legal puzzles. They are engineering, economic and strategic problems rolled into one.

Institutional design will determine whether the informal consultations are preparatory steps toward a binding instrument or merely a theatrical airing of positions. The Convention on Certain Conventional Weapons offers one pathway, but it is governed by a consensus practice that has historically slowed progress. The General Assembly space offers inclusivity and political salience but lacks the automatic binding force of a treaty. To make real progress, the consultations must produce a clear timetable, agreed procedural steps toward the negotiation of a legally binding instrument, and an inclusive mechanism for technical input and verification design. The moment calls for method, not moralizing alone.

Practical content must follow. Any credible treaty architecture should contain several linked elements: a clear, operational definition of prohibited capabilities and functions; an articulation of the standard of meaningful human control and how that standard is to be operationalized in doctrine and design; export and transfer controls adapted to dual-use autonomy technologies; verification and inspection modalities that address software and data as well as hardware; and an accountability regime that assigns responsibility up the chain from operator to commander to manufacturer when misuse or unlawful harm occurs. These elements will be technically and politically difficult, but the alternative is a diffuse patchwork of national rules that will incentivize arms races and the erosion of humanitarian norms.
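To give a flavour of what verification that addresses software and data could mean at the most elementary level, here is a minimal sketch under assumptions of my own: hypothetical file names and a made-up manifest format, not anything drawn from the resolution or from existing treaty practice. It simply records cryptographic fingerprints of a fielded model artifact and its training-data manifest so that inspectors could later confirm that what was declared is what was deployed.

```python
# Minimal sketch of one verification building block: cryptographic fingerprints
# of a model artifact and its training-data manifest. File names and the
# manifest format are illustrative assumptions, not any real treaty mechanism.
import hashlib
import json
import tempfile
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_attestation(model_path: Path, manifest_path: Path) -> dict:
    """Bundle the fingerprints into a record a declaring state could deposit with inspectors."""
    return {
        "model_artifact": model_path.name,
        "model_sha256": sha256_of(model_path),
        "training_data_manifest": manifest_path.name,
        "manifest_sha256": sha256_of(manifest_path),
    }

# Stand-in files so the sketch runs end to end; in reality these would be the
# fielded model binary and a signed inventory of its training data.
with tempfile.TemporaryDirectory() as tmp:
    model = Path(tmp) / "targeting_model.onnx"
    manifest = Path(tmp) / "training_data_manifest.json"
    model.write_bytes(b"placeholder model weights")
    manifest.write_text(json.dumps({"datasets": ["placeholder"]}))
    print(json.dumps(build_attestation(model, manifest), indent=2))
```

A fingerprint of this kind establishes only that the audited artifact and the fielded one are byte-for-byte identical; it says nothing about how the system behaves, which is precisely why behavioral testing and inspection modalities must sit alongside it in any credible regime.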

Finally, negotiators must accept that technology policy is not merely about constraint. It is also about shaping incentives. Treaty text that narrows harmful design choices, specifies compliance tests and funds transparency measures will steer research and procurement toward safer architectures. If the consultations produce only exhortation, private actors and competitive militaries will continue to value capability over caution.

My counsel to delegations preparing for the informal consultations is straightforward: insist on a process that leads to the negotiation of a legally binding instrument; insist on participation from engineers and independent auditors so that rules are informed by the limits of current technology; and insist that the resulting instrument include concrete verification and accountability mechanisms. Ethical rhetoric without mechanism will not constrain design choices made under the glare of battlefield exigency. We can choose a path that preserves human dignity and accountability in the face of automation. Or we can accept a future in which machines make decisions that humans will later be asked to justify. Philosophy and policy intersect here, and the cost of error is not merely reputational. It is moral and mortal.