The legal question before us is narrow in form and vast in consequence. Are we witnessing the emergence of opinio juris that would bind States to an obligation of human oversight over lethal autonomous and other military AI systems? The short answer is: not yet, but important pieces are moving into place. What follows is a reasoned appraisal of where state practice and state belief about legal obligation currently stand, and what that means for efforts to convert ethical norms into hard law.
Opinio juris is the subjective element of customary international law. It requires more than repeated behaviour; it requires a belief that the behaviour is required by law. Customary law thus rests on a two-pillar test: general practice plus acceptance as law. By that standard, the formation of a binding rule is a demanding process, not a policy trend dressed in legal language.
Evidence of state practice relevant to human oversight has become more visible in the last three years. The multilateral process in the Convention on Certain Conventional Weapons has matured from annual debates into a technical negotiation with a Chair's rolling text that now collects detailed proposals on characterisation, legal application, prohibitions and regulatory measures. That rolling text and the supporting background work show a convergence of attention on human judgement and control as a central, regulable concern. The work of the Group of Governmental Experts (GGE) has not yet produced a treaty, but it is clear evidence that States treat human oversight as a subject for collective, normative ordering.
International humanitarian organisations and civil society have pushed forcefully toward converting that political attention into law. The International Committee of the Red Cross has repeatedly urged preservation of human control over the use of force and in 2025 renewed its call for a legally binding instrument to set clear prohibitions and restrictions on autonomous weapon systems. Non-governmental organisations such as Human Rights Watch continue to press for prohibitions on systems that operate without meaningful human control and for positive obligations to ensure oversight where autonomy is permitted. Those interventions matter because they shape the content of debate and the expectations that delegations carry into diplomatic negotiations.
At the same time, important state practice points in the other direction. The United States has promoted a non-binding, norm-building approach encapsulated in a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy and in a substantive update to the Department of Defense's Directive 3000.09 on Autonomy in Weapon Systems. That update reasserts a preference for design and governance measures that preserve appropriate levels of human judgement, and it embeds oversight into acquisition, testing and fielding requirements. But these are doctrinal commitments and administrative obligations, not a statement that the United States accepts an external legal prohibition that it feels itself bound to obey.
This tension is the crux of the opinio juris problem. Many States, international organisations and NGOs now articulate that human involvement is both desirable in itself and functionally required to ensure compliance with international humanitarian law. But the political record shows two different modalities of articulation. One strand calls for a new binding treaty that would forbid certain classes of autonomous weapons and require meaningful human control. Another strand prefers to treat oversight as a matter of responsible practice and compliance with existing law, leaving implementation to national policies, doctrinal constraints and technical controls. The first strand argues that a legal prohibition will close accountability gaps and prevent the dehumanisation of targeting decisions. The second strand warns that a blanket prohibition would be technologically naive and strategically risky. Both positions supply evidence of normative expectation. Only one, however, seeks formal legal obligation.
How does the evidence line up against the customary law test? First, on general practice: practice is accumulating. Multiple States have adopted guidelines, doctrines and domestic rules that require forms of human supervision in design and deployment, and many delegations at the CCW have insisted on some version of human judgement as a constraint on autonomy. The DoD directive and the U.S. Political Declaration are prominent examples of concrete practice. These practices are, however, heterogeneous in scope and are often framed as discretionary policy rather than as legal compulsion.
Second, on acceptance as law: the record is mixed. A substantial group of States, together with the ICRC and many NGOs, argues that certain autonomous capabilities must, as a matter of legal duty, be prohibited or tightly constrained. But several major powers have resisted the language of legal obligation and prefer non-binding norms or tailored regulatory regimes. This ambivalence matters. Opinio juris is not shown by aspirational statements alone. It requires clear evidence that States act out of a sense of legal duty, not merely prudence or policy preference. On the available record through late 2025, the necessary community-wide belief in a legal duty to maintain specific forms of human oversight has not yet cohered.
There is, however, a plausible intermediate conclusion. A limited form of customary expectation is consolidating in specialised domains. For example, weapons that are incapable of being reliably controlled or that are inherently indiscriminate are already treated as illegitimate across much of the international community. The debate now is about borderline cases where design controls, context of use and operational constraints might make autonomy permissible. In those borderline cases, States are increasingly converging on the idea that human judgement must be preserved somewhere along the weapon system lifecycle, whether through design, pre-deployment constraints, or direct real-time control. That is not full opinio juris yet, but it lays the groundwork for a more robust legal consensus.
There is a technical and moral reason for urgency. Autonomous systems that incorporate learning algorithms create risks of unpredictability, reward hacking, and emergent behaviour that make retrospective allocation of responsibility difficult. Scholars and technologists have highlighted how those technical risks frustrate simple narratives of control and complicity, which in turn makes legal attribution fragile unless duties and control architectures are specified in advance. A legal norm that merely says human oversight is required, without specifying what that oversight must be and when it must be exercised, will not close responsibility gaps. Operationalisation is the hard part.
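To see why operationalisation is the hard part, a minimal sketch may help. The Python below is purely illustrative and assumes a hypothetical engagement-authorisation pipeline; every name in it (EngagementRequest, OversightDecision, require_human_authorisation, AUDIT_LOG) is invented for this example and corresponds to no fielded system or negotiated text. What it shows is that a specified oversight duty can be written as a gate that records who exercised judgement, on what information, and when; an unspecified duty of human oversight leaves nothing to audit.

```python
# A purely illustrative sketch of a "specified oversight" gate.
# All names are hypothetical; no real weapon-system API is implied.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EngagementRequest:
    system_id: str
    target_description: str          # what the system reports it has identified
    confidence: float                # model confidence, 0.0 to 1.0
    tested_confidence_floor: float   # minimum confidence validated in testing

@dataclass
class OversightDecision:
    operator_id: str                 # a named individual anchors responsibility
    approved: bool
    timestamp: str
    rationale: str

AUDIT_LOG: list[OversightDecision] = []

def require_human_authorisation(req: EngagementRequest,
                                operator_id: str,
                                approved: bool,
                                rationale: str) -> bool:
    """Gate an engagement on a recorded human judgement.

    Two specified duties are enforced rather than merely exhorted:
    (1) the system may not act outside its tested envelope, and
    (2) a named operator must approve, leaving an auditable trace.
    """
    # Duty 1: refuse automatically when the system is outside the
    # envelope validated during pre-deployment testing.
    if req.confidence < req.tested_confidence_floor:
        approved = False
        rationale = "auto-refused: below tested confidence floor; " + rationale
    # Duty 2: persist who decided, on what basis, and when, so that
    # responsibility can be allocated retrospectively.
    AUDIT_LOG.append(OversightDecision(
        operator_id=operator_id,
        approved=approved,
        timestamp=datetime.now(timezone.utc).isoformat(),
        rationale=rationale,
    ))
    return approved
```

The design choice worth noticing is that both duties are concrete and checkable: a tested envelope the system may not leave, and a named approver whose decision persists. Those are the kinds of specifics a responsibility-closing norm would have to name.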
What should ethicists, lawyers and policy makers do now if they want opinio juris to form? First, move beyond slogans to specification. Norm entrepreneurs should translate meaningful human control into operational standards: clear requirements for predictability, human understanding of system limits, verifiable testing regimes, and chains of command that anchor responsibility. Second, attach legal consequences to failure to maintain specified oversight. Criminal, disciplinary and civil liability frameworks must be made plausible for officers, commanders and contractors who authorise or field systems that they cannot lawfully supervise. Third, insist on verification and transparency in multilateral fora. If the CCW rolling text process produces measures that are subject to verification and peer review, it will help convert normative expectation into a belief that States are legally obliged to act in a certain way. Fourth, accept that hybrid pathways are realistic: some prohibitions will be negotiated; some operational requirements will initially be non-binding but will subsequently harden into law through repeated practice accompanied by statements of legal obligation.
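In the same hedged spirit, the first recommendation, specification, can be illustrated by expressing an oversight standard as checkable data rather than prose. Everything below is hypothetical: the thresholds and field names are invented for illustration and reproduce no negotiated text or national rule.

```python
# Illustrative sketch: an oversight standard expressed as checkable data,
# so that compliance can be verified rather than merely asserted.
# Thresholds and field names are invented for this example.
OVERSIGHT_STANDARD = {
    "max_false_positive_rate": 0.001,   # predictability requirement
    "min_test_scenarios": 500,          # verifiable testing regime
    "requires_named_approver": True,    # chain-of-command anchor
    "min_operator_training_hours": 40,  # human understanding of system limits
}

def verify_compliance(system_record: dict) -> list[str]:
    """Return the violations of the declared standard (empty means compliant)."""
    violations = []
    if system_record.get("false_positive_rate", 1.0) > OVERSIGHT_STANDARD["max_false_positive_rate"]:
        violations.append("predictability: false-positive rate exceeds the standard")
    if system_record.get("test_scenarios", 0) < OVERSIGHT_STANDARD["min_test_scenarios"]:
        violations.append("testing: too few validated test scenarios")
    if OVERSIGHT_STANDARD["requires_named_approver"] and not system_record.get("named_approver"):
        violations.append("responsibility: no named approving officer on record")
    if system_record.get("operator_training_hours", 0) < OVERSIGHT_STANDARD["min_operator_training_hours"]:
        violations.append("training: operator hours below the required minimum")
    return violations
```

The structural point is that once requirements are stated as data a review body can check, failure to maintain oversight becomes a demonstrable fact to which liability can attach, which is precisely the move from expectation to obligation that the customary-law test demands.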
Conclusion. The international community is at a formative moment. Human oversight has moved from ethical exhortation to diplomatic architecture and domestic doctrine. These are necessary preconditions for a customary obligation, but they do not in themselves make the norm legally binding. Opinio juris on human oversight over military AI is embryonic: visible and influential, but not yet mature. If scholars and States wish to see a rule that can be enforced, the next phase must be the hard work of specification, enforcement architecture and repeated, clearly motivated state practice expressed as legal duty. Without that work, we will have norms that guide behaviour but not obligations to which a court or a counterparty can hold a State accountable.