The first session of the 2024 Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS) convened in Geneva earlier this year. Delegations met to begin work under the framework of the Convention on Certain Conventional Weapons (CCW) on a mandate that is at once precise and portentous: to further consider, and to formulate, a set of elements of an instrument addressing LAWS.

To understand the significance of this meeting we must place it within the political architecture that produced it. In October 2023 the United Nations Secretary-General and the International Committee of the Red Cross issued a joint call for States to negotiate a legally binding instrument on autonomous weapon systems, with the extraordinary target of concluding negotiations by 2026. The rhetorical weight of that appeal created an expectation that technical and normative debates would accelerate.

The GGE is operating under a renewed, multi-year mandate agreed by High Contracting Parties in late 2023. That procedural choice was deliberate and revealing. By setting a three-year timetable and tasking the Group to produce “elements of an instrument”, the CCW sought to balance competing pressures between those who demand a prohibitive treaty and those who prefer softer norms or technical measures. But even a carefully phrased mandate cannot erase a fundamental political divergence about ends and means. Observers and analysts noted substantive debate at the March session and differing interpretations of what the mandate requires or allows. Progress will depend less on procedural wording than on the political will behind it.

Civil society and humanitarian actors were present and vocal in Geneva. The ICRC submitted formal recommendations to the Secretary-General earlier in 2024 arguing for a legally binding instrument, and it emphasized the need to prohibit autonomous systems that directly target humans as well as those that are “unpredictable”. These submissions are not technocratic flourishes. They are moral and legal markers intended to orient the state debate toward the protection of human dignity and the preservation of accountability in the use of force.

Campaign coalitions pressed similar themes in the GGE room. Representatives from advocacy networks urged the Group to consider explicit operational prohibitions and insisted that the notion of meaningful human control must carry concrete, enforceable obligations rather than remain a rhetorical safety valve. These interventions are important for two reasons. First, they keep humanitarian law at the center of a conversation otherwise prone to domination by military planners and technologists. Second, they expose a persistent asymmetry: states with advanced military technology can explore nuanced definitions of autonomy, while many other states and non-state actors concentrate on categorical prohibitions to protect civilians.

What, then, can we reasonably expect from the GGE process over the coming months? The honest answer is guarded. The Group has been given a drafting task, but it lacks a single, universally shared endpoint. Several states view the task as one of technical clarification and risk mitigation. Others seek language that can form the scaffolding of a new treaty that limits or bans particular categories of systems. The result is an iterative exercise in translation: translating ethical concerns into legal text, translating technical capabilities into definitional boxes, and translating national security apprehensions into internationally acceptable limits. Each translation risks distortion.

There is a deeper philosophical tension beneath all of this. Weapons systems, like any technology, mirror political judgments about who may decide on life and death. If the international community settles for a regime of best practices, export controls, or transparency measures, it will implicitly accept a model of technological governance that privileges adaptation over prohibition. If it pursues a binding treaty with explicit prohibitions, it will stake a normative claim about the boundaries of legitimate violence in an age of machines. Both choices entail trade-offs between expedience and principle, between the desire to preserve maneuver space for commanders and the imperative to safeguard human dignity. The GGE is the place where that trade-off is being argued in technical, diplomatic, and sometimes philosophical terms.

For readers who follow robotics and autonomy, the practical lesson is immediate. Technical ingenuity will not by itself resolve the normative questions about target selection, accountability, or predictability. Conversely, law and ethics divorced from realistic technical constraints will produce either unenforceable bans or vague prohibitions that are easy to circumvent. What the GGE must do is produce text that is both juridically meaningful and technically informed. That is an exacting standard, but it is the only one that will yield durable rules rather than ephemeral assurances.

Finally, there is a policy imperative. States that care about maintaining human agency in warfare should invest now in the hard work of drafting, testing, and building consensus around specific elements of an instrument. Civil society and the scientific community can and should continue to shape the debate, but responsibility rests with sovereigns who will be asked to bind themselves. The coming months will test whether the momentum generated by the UN and ICRC joint appeal can be converted into an agreed architecture that curbs the most dangerous uses of autonomy while leaving space for legitimate defensive technologies.

Geneva is rarely where revolutions are announced. It is where the slow work of ordering norms occurs. The GGE has convened. The fundamental question remains whether that convening will become an engine of precaution or a furnace that tempers ethical ambition into the ductile metal of compromise. The answer will matter to soldiers, to civilians, and to the moral imagination of states that still believe we can choose what machines are allowed to do in our name.