The question is simple and unnerving. Can the international community agree, by 2026, on rules that meaningfully constrain the use of autonomous systems that can kill? The short answer is: maybe, but not in the form many campaigners imagine. If anything, the path toward any substantive instrument will be messy, partial, and political rather than purely legal.
There is momentum for rule making. In 2024 the Group of Governmental Experts convened under the Convention on Certain Conventional Weapons worked under an explicit mandate to formulate elements of an instrument addressing lethal autonomous weapons systems. That process produced a rolling text and a series of state working papers during 2024, making the CCW the most concrete multilateral locus for treaty-level negotiation.
At the same time the United Nations First Committee recorded unusually strong support for a resolution on lethal autonomous weapons systems in November 2024. The vote signalled broad global concern about the humanitarian, legal, ethical, and security risks posed by autonomy in weapons. Civil society actors amplified the political pressure created by that vote, insisting that the CCW cannot indefinitely defer binding obligations.
Parallel to the formal UN tracks, there is a fast-growing diplomatic ecosystem setting norms. The REAIM process and associated summits have produced non-binding blueprints and calls to action aimed at harmonizing state practice on “responsible” military AI. These instruments are explicitly political and technocratic rather than juridical. They can help create default expectations, but they do not carry the enforcement mechanisms a treaty would.
Those normative pushes sit beside regulatory change in other domains. The European Union adopted a comprehensive AI Act in 2024, which creates a regulatory baseline for AI within the bloc even while exempting purely military applications. That law alters the political climate for AI governance and raises the reputational and economic costs of being perceived as a laggard on safety and transparency. It is not a weapons treaty, but it matters to how states and companies think about controls, audits, and liability.
Why a binding treaty by 2026 is unlikely in its maximalist form
Three technical and three political barriers together make a fully binding global ban on autonomous lethal targeting by 2026 improbable.
Technical barriers. First, defining the object of prohibition is hard. Autonomy is a spectrum and every definition invites loopholes. Second, verification is intrinsically difficult. Software and algorithms are malleable, dual-use, and easy to hide behind claims of national security. Third, deployment contexts vary. A sensor-driven sentry system guarding a base is different from a swarming loitering munition hunting individuals; a one-size-fits-all prohibition struggles with these permutations.
Political barriers. First, major military powers and technologically advanced states have incentives to preserve optionality. Second, doctrine and procurement cycles are long, and military establishments will resist constraints they believe would cede advantages. Third, consensus decision making in bodies like the CCW makes a comprehensive, rapid treaty unlikely. The likely consequence is fragmentation: a patchwork of coalitions of the willing, regional rules, and non-binding codes is the more probable near-term outcome.
Possible 2026 outcomes worth taking seriously
1) A narrow CCW protocol. The most plausible binding result in a short window is a narrowly framed protocol agreed by a coalition inside the CCW. Such a protocol might prohibit systems that can select and attack human targets without meaningful human judgment while leaving other systems subject to national implementation measures. That outcome preserves legal force while limiting the scope to what reluctant states can accept.
2) A hybrid regime of norms plus standards. Expect a mix of political declarations, interoperability and safety standards, export controls, and mandatory weapons review regimes. The REAIM blueprint and EU regulatory momentum could seed technical standards for audit logs, human-machine interface requirements, and pre-deployment legal reviews. Those measures would be easier to adopt quickly and could be enforced through procurement and export licensing rather than a classic verification treaty.
3) Coalitions of the willing. Some like-minded states may conclude bilateral or plurilateral agreements that effectively function as treaty-lite arrangements. Such coalitions can move faster, test verification mechanisms, and create practical templates for broader uptake later.
Verification and compliance: what will be on the table
Any credible instrument will need mechanisms that go beyond declaratory language. Possible mechanisms include: mandatory transparency reports about doctrine and fielded capabilities, common technical standards for event logging and audit trails, procedural requirements for legal weapons reviews, export controls on key autonomy-enabling subsystems, and a graduated inspection regime for hardware. Even these ideas are imperfect. Auditable black boxes and code escrow arrangements face deep resistance because states view algorithmic details as operationally sensitive and commercially proprietary.
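To make “event logging and audit trails” slightly more concrete, here is a minimal sketch, in Python, of what a tamper-evident engagement log could look like: each record commits to the hash of its predecessor, so any later alteration is detectable during an inspection. The field names (system_id, human_authorization, and so on) and the structure are illustrative assumptions, not terms drawn from any negotiated text.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

# Illustrative sketch only: field names are assumptions, not treaty language.
@dataclass
class EngagementRecord:
    system_id: str            # which fielded system acted
    event: str                # e.g. "target_nominated", "weapon_released"
    human_authorization: str  # identity/role of the authorizing operator, if any
    timestamp: float = field(default_factory=time.time)

class AuditLog:
    """Hash-chained log: each entry commits to the previous entry's hash,
    so tampering with any earlier record breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: EngagementRecord) -> str:
        payload = {"record": asdict(record), "prev_hash": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any altered record is detected."""
        prev = "0" * 64
        for entry in self.entries:
            expected = hashlib.sha256(
                json.dumps({"record": entry["record"], "prev_hash": prev},
                           sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

if __name__ == "__main__":
    log = AuditLog()
    log.append(EngagementRecord("sentry-07", "target_nominated", "operator:lt_example"))
    log.append(EngagementRecord("sentry-07", "weapon_released", "operator:lt_example"))
    print("chain intact:", log.verify())   # True
    log.entries[0]["record"]["human_authorization"] = "none"  # simulate tampering
    print("chain intact:", log.verify())   # False
```

A standard along these lines would not reveal algorithms or targeting logic, which is precisely why it may be more palatable to states than code escrow or black-box inspection.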
A philosophical caveat about “meaningful human control”
Much of the current debate converges on the concept of meaningful human control. That phrase is attractive because it signals a moral boundary. The problem is that the concept must be operationalized. Without concrete specifications it becomes a rhetorical brake rather than a legal standard. Human operators fatigue. Automation bias leads to overreliance on machine outputs. If states write “meaningful” into law without specifying latency thresholds, decision authorities, audit requirements, and training protocols, the term risks becoming performative rather than substantive. This is a conceptual failure that treaty drafters must avoid.
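As a purely illustrative sketch of what “operationalized” might mean, the fragment below encodes a few assumed criteria, a minimum operator review window, a named human decision authority, completed operator training, and a written audit entry, as checks that must all pass before an engagement is permitted. The threshold value, field names, and function are invented for illustration; they are not proposals from any draft text.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Assumed minimum time the operator has to review and veto a machine
# recommendation; the number is illustrative, not a proposed standard.
MIN_REVIEW_WINDOW_S = 10.0

@dataclass
class EngagementContext:
    review_window_s: float               # seconds available to assess the recommendation
    authorizing_officer: Optional[str]   # named human decision authority, or None
    operator_trained: bool               # operator completed the assumed training protocol
    audit_entry_written: bool            # decision logged before weapon release

def meets_control_criteria(ctx: EngagementContext) -> Tuple[bool, List[str]]:
    """Return (permitted, failed_criteria); every criterion must pass."""
    failures: List[str] = []
    if ctx.review_window_s < MIN_REVIEW_WINDOW_S:
        failures.append("review window too short for deliberate human judgment")
    if not ctx.authorizing_officer:
        failures.append("no named human decision authority")
    if not ctx.operator_trained:
        failures.append("operator has not completed the required training protocol")
    if not ctx.audit_entry_written:
        failures.append("engagement decision not recorded in the audit trail")
    return (len(failures) == 0, failures)

if __name__ == "__main__":
    ctx = EngagementContext(review_window_s=4.0, authorizing_officer="Lt Example",
                            operator_trained=True, audit_entry_written=True)
    permitted, reasons = meets_control_criteria(ctx)
    print(permitted, reasons)  # False, review window too short
```

The point is not that a treaty should contain code, but that every clause should be reducible to checks this explicit; anything that cannot be is likely to function as rhetoric rather than constraint.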
What technologists and ethicists must do now
If 2026 is to produce anything more than political theater, technical experts, ethicists, and engineers must be at the table where clauses are drafted. That means providing achievable definitions, realistic verification tests, and practical audit specifications. It also means designing mechanisms that can be updated as the technology evolves. Treaties that ossify technical standards risk becoming either obsolete or dangerous.
Concluding judgment
A single, sweeping global ban on all autonomous lethal decision making by 2026 is improbable. What is plausible is a layered, hybrid regime that combines a narrow CCW protocol, stronger non-binding blueprints, regional regulation like the EU AI Act, and coalitions of states employing export controls and procurement rules. Such a regime would be imperfect. It could nonetheless materially reduce some risks while leaving gaps that will have to be managed politically.
If there is a strategic lesson it is this. We will not be saved by a single legal instrument. If posterity is to look kindly on our decisions, technologists and moral philosophers must work with diplomats to translate high-minded principles into operable, inspectable obligations. Otherwise the machines will inherit the legal fog, and humans will inherit the moral cost.