UNIDIR’s announcement on 6 March 2025 that it will convene a Global Conference on AI, Security and Ethics crystallizes a point many of us have been making for years. The conference, scheduled for 27–28 March in Geneva, aims to bring diplomats, military practitioners, civil society and industry together under the institutionally neutral roof of UNIDIR’s Roundtable for AI, Security and Ethics. That combination of actors is the right starting point for any attempt to move from abstract principles to operational governance, but it will not by itself resolve the deeper conceptual tensions at the heart of military AI governance.

Two legal and political facts form the backdrop to this forum and make its agenda urgent. First, the United Nations General Assembly adopted a resolution in March 2024 urging the promotion of safe, secure and trustworthy AI and emphasizing human rights and transparency in AI design and deployment. Second, the First Committee’s late‑2024 work on the military applications of AI produced a mandate asking the Secretary‑General to solicit the views of states and other stakeholders on AI in the military domain. Those parallel trajectories in the General Assembly and the First Committee create a diplomatic seam that a Geneva forum can productively stitch together. But institutional momentum is not the same as conceptual clarity.

What, then, are the substantive governance problems the conference must confront rather than merely aestheticize? I highlight three interlocking challenges.

1) Defining meaningful human control. The phrase has migrated from civil society slogans into official texts, yet it remains rhetorically elastic. In practice the problem is not merely whether a human signs off on a system’s action, but how command authority, situational understanding and moral judgment are preserved when speed, sensor fusion and algorithmic recommendation compress decision timelines. If human control is only a legal checkbox appended to an automated timeline, we will have achieved form without ethical substance. Policy frameworks must therefore operationalize human agency in measurable terms: what inputs a human must have, how intent is communicated, and what review latencies are acceptable for different mission profiles, as the sketch below illustrates. This is a technical and doctrinal problem as much as a legal one.
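As a thought experiment, here is a minimal sketch in Python of what a machine‑checkable review policy might look like: per‑profile required inputs and latency budgets, enforced before any human authorization counts. Every profile name, input field and threshold here is a hypothetical illustration, not an actual doctrinal standard.

```python
from dataclasses import dataclass, field
from enum import Enum
import time

class MissionProfile(Enum):
    FORCE_PROTECTION = "force_protection"    # defensive, time-critical
    DELIBERATE_STRIKE = "deliberate_strike"  # pre-planned, time-permissive

# Per-profile policy: which inputs a human reviewer must actually be shown,
# and how long a recommendation may wait before it must be escalated.
REVIEW_POLICY = {
    MissionProfile.FORCE_PROTECTION: {
        "required_inputs": {"sensor_track", "threat_class"},
        "max_review_seconds": 5.0,
    },
    MissionProfile.DELIBERATE_STRIKE: {
        "required_inputs": {"sensor_track", "threat_class",
                            "collateral_estimate", "legal_review"},
        "max_review_seconds": 600.0,
    },
}

@dataclass
class Recommendation:
    profile: MissionProfile
    inputs_presented: set
    created_at: float = field(default_factory=time.monotonic)

def human_gate(rec: Recommendation, approved_by: str | None) -> str:
    """Return 'engage', 'escalate', or a rejection reason under the profile's policy."""
    policy = REVIEW_POLICY[rec.profile]
    missing = policy["required_inputs"] - rec.inputs_presented
    if missing:
        # A sign-off without the required inputs is the 'legal checkbox' failure mode.
        return f"reject: reviewer not shown {sorted(missing)}"
    if time.monotonic() - rec.created_at > policy["max_review_seconds"]:
        # Stale recommendations are escalated, never silently executed.
        return "escalate: review window expired"
    return "engage" if approved_by else "reject: no authorizing human"
```

The point of the exercise is not the specific numbers but that once a policy takes this form, it can be audited, red‑teamed and argued over line by line.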

2) Accountability and auditability across life cycles. Autonomous and AI‑enhanced systems are socio‑technical assemblages: responsibility fractures across designers, integrators, commanders and political authorities. Effective governance must therefore insist on provenance, verifiable logs and forensically sound telemetry, so that when harm occurs an evidentiary trail exists. That requires procurement standards, disclosure obligations for critical components, and independent testing regimes aligned with international humanitarian law; the sketch below shows the kind of tamper‑evident logging such standards would mandate. Multi‑stakeholder fora such as UNIDIR’s can help define minimum chain‑of‑custody standards for AI in the military domain, but they must be followed by technical standards bodies and changes to national procurement policy.
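To illustrate what “forensically sound telemetry” could mean in practice, here is a minimal sketch of a tamper‑evident decision log in Python, using a hash chain so that any retroactive edit breaks the chain and is detectable on audit. The field names and the use of SHA‑256 are illustrative assumptions, not an existing military standard.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only log where each record commits to the hash of its predecessor."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, actor: str, system_id: str, event: dict) -> None:
        record = {
            "timestamp": time.time(),
            "actor": actor,          # who or what took the action
            "system_id": system_id,  # which component version produced it
            "event": event,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self._records.append(record)

    def verify(self) -> bool:
        """Recompute the whole chain; returns False if any record was altered."""
        prev = "0" * 64
        for record in self._records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

The design choice worth noting is that integrity comes from structure, not trust: an auditor does not need to believe the operator, only to recompute the chain.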

3) The politics of restraint versus the politics of advantage. States do not negotiate in a vacuum. Calls for strong prohibitions on certain classes of autonomous lethal systems coexist with legitimate security rationales for AI in defensive functions and force protection. The recent international movement to widen UN discussions on autonomous weapons and to solicit broader state and stakeholder input reflects that tension: diplomatic forums are necessary to avoid fragmentation, but they will struggle to restrain competitive dynamics without credible verification or linked incentives. The UN system’s recent steps illustrate an appetite for broader engagement, but engagement alone cannot substitute for concrete constraint mechanisms.

UNIDIR’s conference offers three pragmatic opportunities, if organizers and participants choose to seize them.

First, translate high‑level norms into testable procurement requirements. Ethical principles become meaningful when encoded as minimum technical criteria: audit logs, adversarial robustness thresholds and traceability requirements written into contract language. Industry and states must collaborate to design these testbeds now, rather than after a catastrophic misuse reignites the debate. The sketch below illustrates how such criteria could be expressed as machine‑checkable data rather than aspirational prose.
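A minimal sketch, assuming entirely hypothetical metric names and thresholds, of what “norms as procurement requirements” might look like once a contract’s criteria are data a compliance tool can check mechanically:

```python
# Contractual thresholds expressed as data. All metric names and values here
# are illustrative assumptions, not real standards or recommended numbers.
CONTRACT_REQUIREMENTS = {
    "adversarial_robustness_min": 0.90,  # accuracy under an agreed attack suite
    "audit_log_coverage_min": 1.00,      # fraction of engagement decisions logged
    "traceability_required": True,       # each build tied to a training data manifest
}

def check_compliance(vendor_report: dict) -> list[str]:
    """Return a list of human-readable failures; an empty list means the report passes."""
    failures = []
    if vendor_report.get("adversarial_robustness", 0.0) < CONTRACT_REQUIREMENTS["adversarial_robustness_min"]:
        failures.append("adversarial robustness below contractual threshold")
    if vendor_report.get("audit_log_coverage", 0.0) < CONTRACT_REQUIREMENTS["audit_log_coverage_min"]:
        failures.append("audit log coverage incomplete")
    if CONTRACT_REQUIREMENTS["traceability_required"] and not vendor_report.get("training_data_manifest"):
        failures.append("missing training data manifest")
    return failures

# Example: a hypothetical vendor report that fails one criterion.
report = {"adversarial_robustness": 0.93, "audit_log_coverage": 0.97,
          "training_data_manifest": "sha256:..."}
print(check_compliance(report))  # ['audit log coverage incomplete']
```

None of these thresholds is a recommendation; the argument is that whatever numbers states and vendors agree on should live in contract language a test harness can execute.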

Second, decentralize epistemic authority. A persistent problem in arms governance is that technical expertise becomes siloed inside national laboratories or vendor claims. The forum should press for transparent, reproducible evaluation protocols, published in ways that permit independent replication. Peer review matters not only in academia but in battlefield technology assessment; the sketch below shows the minimum ingredients such a protocol would publish.
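As a sketch of what a replicable protocol publishes, assuming a toy model interface and dataset format of my own invention: a pinned seed, a dataset hash and a deterministic sampling procedure, so an independent lab can re-run the evaluation and confirm the claim.

```python
import hashlib
import json
import random

def evaluate(model, dataset: list, seed: int) -> dict:
    """Deterministic evaluation: the same seed yields the same sample every run."""
    rng = random.Random(seed)
    sample = rng.sample(dataset, k=min(100, len(dataset)))
    correct = sum(1 for case in sample if model(case["input"]) == case["label"])
    return {"accuracy": correct / len(sample)}

def publish_protocol(dataset: list, seed: int, result: dict) -> dict:
    """Bundle everything an independent replicator needs to verify the result."""
    # Hashing a canonical serialization of the dataset lets reviewers confirm
    # they are evaluating against exactly the same test cases.
    dataset_hash = hashlib.sha256(
        json.dumps(dataset, sort_keys=True).encode()
    ).hexdigest()
    return {"dataset_sha256": dataset_hash, "seed": seed, "result": result}
```

Anything a vendor cannot publish in this form (the data, the seed, the scoring rule) is precisely what independent reviewers should treat as an unverified claim.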

Third, separate negotiations on inherently unlawful functions from those on acceptable uses. A strategic cleavage that emerges in responsible‑use conversations is between systems likely to violate core humanitarian principles by design, and systems where predictability, controllability and discrimination can be demonstrated. Geneva can help articulate that taxonomy and steer norm development toward prohibiting classes of systems that cannot be made compliant rather than drafting vague bans that invite evasive engineering.

A final philosophical note. We too often treat governance as if it were a problem that could be solved once and then set on a shelf. In truth, governance is iterative: technologies and doctrines co‑evolve. The job of ethical governance is not to produce a single canonical text but to cultivate resilient procedures that can adapt, audit and correct. UNIDIR’s new forum is an important institutional contribution to that ongoing practice. It is an invitation to rigorous thought, to shared technical work and to the slow art of embedding moral judgment into systems that by design and temptation prefer speed and opacity. If the conference merely reprints the same lists of principles, it will be a missed opportunity. If it produces concrete pathways for procurement reform, testing and multi‑stakeholder oversight, it will have done indispensable work for the sanity of future commanders and the safety of civilians.

UNIDIR can and should be the laboratory where diplomats, lawyers, engineers and ethicists learn one another’s grammars. The alternative is the quiet drift into a future where machines make decisions that humans cannot fully explain and for which humans cannot be held satisfactorily accountable. That is not a hypothetical that deserves a polite conference plenary. It is an ethical emergency that demands institutional invention and procedural rigor. The Geneva forum is one early test of whether the international community can provide both.