The second session of the 2025 Group of Governmental Experts on lethal autonomous weapon systems will matter precisely because it is where abstract sketches must begin to assume legal and operational form. States agreed in 2024 to a two-part schedule for the GGE that places the second convening in September 2025, with a clear mandate to hammer out, by consensus, a set of elements of an instrument and other possible measures addressing LAWS.

That mandate has already pushed the GGE away from wide-ranging expressions of concern and toward a concrete working document. The Chair circulated a revised rolling text on 12 May 2025 that structures the debate around five boxes: a working characterisation of LAWS; the application of international humanitarian law and notions of human judgement and control; prohibitions and requirements; regulatory measures, including legal review and lifecycle obligations; and accountability and responsibility. The content of that 12 May draft is the terrain on which the September session will be fought and, ideally, on which it will build.

Several important convergences are already visible. First, most delegations accept that international humanitarian law applies fully to LAWS and that some form of context-appropriate human judgement and control is necessary to secure compliance with distinction, proportionality and precautions in attack. That formulation moves the debate beyond rhetorical invocations of “meaningful human control” toward a lifecycle frame that spans design, testing, deployment, and post-incident investigation. This is the argument advanced in the Chair’s background paper and reflected in the Chair’s summary of the March session.

Second, the rolling text already contains both prohibitory language and regulatory safeguards. It affirms absolute prohibitions on systems that are inherently indiscriminate or of a nature to cause superfluous injury, while also listing measures that States should adopt to ensure predictability, traceability and reliability, and the ability to deactivate systems where necessary. Those twin tracks reveal the GGE’s attempt to do two things at once: rule out the worst systems while preserving space for regulated, lawful uses that retain military utility. How States translate those textual imperatives into operational thresholds will be the defining political question at Session Two.

Yet old divisions persist beneath the surface of convergence. The characterisation of what counts as a LAWS remains fiercely contested. Some delegations argue for a broad, technology-agnostic formulation that captures any system able to identify, select, and engage targets without further human intervention. Others press for a narrower, cumulative articulation that avoids sweeping in long-established systems that can already be used lawfully under IHL. The dispute over the connective tissue of the definition, literally whether to join the criteria with the conjunctive “and” or the disjunctive “and/or”, is not semantic quibbling. It determines the boundaries of future obligations and the potential reach of any prohibitions or controls. Expect that fault line to receive disproportionate attention in September.

A second fault line concerns how to operationalise human judgement and control. Several delegations and coalitions have moved the debate toward concrete measures: design and testing requirements, limits on real-time machine learning for target selection, restrictions on the number of autonomous engagements, perimeter and geographical constraints, and requirements for clear human-machine interfaces and command chains. These are not mere bureaucratic add-ons. They translate ethical and legal desiderata into engineering and procedural obligations that militaries and industry must actually implement. The Chair’s technical background paper is explicit in urging examples and lifecycle thinking to clarify what context-appropriate human judgement and control can mean in practice.

Third, there is a political bifurcation between States emphasising prohibition and those emphasising regulation and responsible use. Some States and civil society actors will press for categorical bans on certain classes of systems or functions. Others, including several that submitted working papers focused on definitions and the management of autonomy, aim to preserve policy space for lawful, regulated uses. The working papers submitted after the March session show these competing emphases and indicate which coalitions may be seeking to convert convergences into concrete legal text rather than open-ended political language. Expect negotiators to test lobbying power and alliance-building in the intersessional lead-up to September.

There is also a pragmatic, technical problem that political language often elides. Terms such as predictability, reliability, explainability and traceability sound coherent until engineers ask what operational measures would satisfy them. Explainability for a deep learning model is not the same as explainability for a deterministic control law. Predictability in a benign test environment is not the same as predictability in a dense urban combat zone with degraded sensors and adversarial countermeasures. If Session Two is to be substantive, it must force negotiators to link normative terms to measurable, testable requirements and to the kind of legal review procedures that national authorities can actually carry out. The Chair’s draft already identifies legal review and testing regimes as central to the lifecycle approach. That is where legalists, technologists and military practitioners must converge, or the text will remain aspirational.

Finally, the procedural horizon matters. The GGE’s mandate envisages a set of elements to be agreed by consensus and advanced before the mandate period ends. That time pressure is both an accelerant and a risk. It pushes States to turn conceptual convergence into drafting results, but it may also encourage lowest-common-denominator compromises that paper over hard questions. The responsible path is neither stalemate nor hasty dilution. It is to use Session Two to replace abstract concept-talk with specific, operational clauses on characterisation, lifecycle obligations, legal review, transparency measures, and mechanisms for accountability and redress. Those are the building blocks of enduring norms that can govern autonomous weapons across changing technologies.

My expectation for Session Two is therefore modest and programmatic. Do not expect a negotiated treaty text in September. Expect instead an intensified conversation around the rolling text’s operative paragraphs, with particular attention to definitions, lifecycle requirements, and practical measures to ensure human responsibility and accountability. The real test will be whether delegations come prepared with workable, technically informed language that can be translated into national policy and procurement practice. If they do, the GGE will have moved from diagnosis to prescription. If they do not, we will have another year of moral insight and legal equivocation while the field’s technological momentum continues unabated.

The ethical stakes are straightforward. Autonomy in the use of lethal force is not a question of convenience. It is a question of who bears moral and legal responsibility for life and death decisions. The GGE can either entrench accountability and rigorous lifecycle constraints, or it can sanction a system architecture that diffuses responsibility and normalises distance from decision making. For scholars and practitioners who care about law, duty and human dignity, Session Two is a political opportunity to ensure that technology serves law and humanity, not the other way around.