The convening of ethicists, generals, engineers and diplomats under the same roof is not in itself a victory. It is, at best, a necessary condition of civilized governance and, at worst, a ceremony that lets vested interests declare restraint while business as usual continues. Recent multistakeholder efforts make this tension plain: forums framed around security, even when nominally ethical, reconfigure questions of responsibility into problems of risk management and operational continuity. This reconfiguration matters because it changes which harms are counted and who is empowered to prevent them.

The Roundtable for AI, Security and Ethics (RAISE) began as a deliberately plural project in March 2024, when UNIDIR and partners opened a space for cross-regional, multisectoral dialogue. That launch signalled an awareness that technical and normative expertise must co-evolve if AI in the security domain is to be governed at all. But awareness is not architecture. To matter, dialogue must generate concrete institutional commitments that survive the churn of bureaucratic priorities.

RAISE matured through a second roundtable in September 2024, deliberately timed alongside the Responsible AI in the Military Domain (REAIM) summit. The juxtaposition was revealing. One forum emphasised coalition building across civil society, industry and research. The other pressed states and militaries to convert abstract commitments into operational safeguards. Together they exposed a recurring failure mode: ethical principles articulated in the abstract frequently evaporate when translated into procurement schedules, rules of engagement and battlefield ambiguity.

The REAIM summit in Seoul crystallised both the promise and the limits of current governance practice. That meeting concluded with a “Blueprint for Action”, endorsed by roughly sixty states, which stresses human involvement in critical decisions and the need for assessments and confidence-building. The endorsement is important. It is also nonbinding. The political value of such a blueprint therefore depends on follow-through, independent verification and the political cost of noncompliance. When major actors decline to sign or to embed commitments in domestic practice, the blueprint risks becoming a rhetorical shield rather than a standard.

Practitioners who work at the intersection of autonomy and lethality have offered blunt technical counsel that should temper any complacency. Panels at security summits and research briefings repeatedly emphasise that current AI systems are brittle, that small distributional shifts can produce catastrophic misbehaviour, and that engineering margins of error are not the same as ethical forgiveness. Lessons from the decades-long debate on autonomous weapons apply: technological novelty does not excuse the absence of rigorous life cycle governance, auditability and assignment of liability. If we do not build institutional scaffolding that makes developers, operators and commanders jointly accountable across the system life cycle, then ethical language will be used to cover what are in effect capability gaps.

There is a deeper conceptual hazard. Framing the problem primarily as “security” encourages prioritisation of resilience, secrecy and speed. Those priorities are sometimes the right ones. They also invite a narrowing of ethical inquiry. Questions about bias, civilian privacy, information integrity and proportionality can be treated as secondary to the imperative of maintaining tactical advantage. This narrowing matters because the harms of militarised AI rarely respect doctrinal categories. Surveillance systems biased against vulnerable populations, automated targeting aids that amplify error, or information operations that corrode democratic trust are harms with both strategic and moral consequences. An ethics forum that sits within a security conference therefore needs structural protections to prevent ethical considerations from being co-opted into a checklist for risk acceptance.

What would such structural protections look like in practice? First, insist on multi-party, transparent testing regimes so that claims about system performance can be scrutinised beyond the developer and the procuring state. Second, mandate independent incident reporting and open redress channels so that errors produce public learning rather than private correction. Third, require legal and normative clarity on the locus of responsibility so that downstream commanders and upstream developers cannot point at each other when systems fail. Fourth, embed ethicists and humanitarian actors within procurement and deployment decision chains, not merely on panels where they serve as legitimating witnesses. Finally, invest in institutional capacity building for lower- and middle-income states so that global governance is not simply a set of standards imposed by technology exporters but a truly reciprocal set of practices. These measures are not technocratic luxuries. They are the conditions under which the moral claims of an ethics forum can be translated into operational reality.

We should also be candid about what an ethics forum cannot do. It cannot substitute for international law or for the political will to bind states to obligations. Nor can it fix the structural incentives of firms that profit from rapid deployment and opaque models. Ethics must therefore be coupled to policy instruments that have enforcement bite: export controls, procurement standards, verifiable norms of behaviour and, when necessary, sanctions. Without those levers, ethical pronouncements will remain a balm rather than a brake.

The Global Conference on AI, Security and Ethics now planned for Geneva signals an opportunity to move from ad hoc statements toward durable mechanisms. If that opportunity is to be realised, the conference must make hard institutional choices rather than accumulate soft consensus. It must design forums in which ethical scrutiny has procedural weight, in which technical evaluation is public and replicable, and in which the voices of those most likely to suffer harm can temper the claims of technocratic optimism.

An ethics forum attended by security professionals is necessary and welcome. But it will be consequential only if ethics is not an appendix to security but its governance core. The task before us is to build moral architecture into systems that were designed to be morally agnostic. That is arduous work. It is also the only plausible way to ensure that human dignity and human judgment remain the benchmark against which military uses of AI are judged.