NATO’s public turn to artificial intelligence has been a study in institutional caution. The Alliance set out a pragmatic set of Principles of Responsible Use (PRUs) in 2021 that anchor AI adoption to lawfulness, accountability, explainability, reliability, governability, and bias mitigation. These principles are a useful baseline for interoperability and risk management, but they do not by themselves answer the harder normative question: how will NATO ensure that the use of AI preserves the inviolable dignity of the human beings who will remain at the centre of any legitimate security order?
The phrase human dignity is not merely rhetorical. International normative instruments and regional bodies treat it as a foundational constraint on technological design and deployment. UNESCO’s Recommendation on the Ethics of Artificial Intelligence frames respect for human dignity and human rights as the compass for all AI systems, insisting that design and lifecycle choices should protect people’s physical and mental integrity and their capacity for moral agency. The Council of Europe and related regional instruments issue similar prescriptions: where machine systems risk degrading persons’ moral status or substituting for human judgment in domains that implicate dignity, states must act to prevent that outcome. These texts shift the discussion from “what works” to “what we may ethically permit.”
NATO is not blind to these concerns. Its 2024 Biotechnology and Human Enhancement (BHE) strategy explicitly invokes human agency and the preservation of innate human dignity when the technologies in question interface directly with persons. That language is notable because it ties the Alliance’s ethical posture to the lived experience of service members and to limits on what may be done to or through human bodies and minds. Yet that explicit concern for dignity appears in the BHE context rather than in the general AI strategy, where dignity is implicit at best and mediated through lawfulness and accountability. The asymmetry matters. When dignity is made explicit in one policy strand but only quietly implied in another, operational choices in the field are left vulnerable to narrow technical rationales that marginalise moral claims.
Why does this distinction demand attention? Because the practical locus of harm is where doctrine meets sensors and algorithms. An AI that accelerates targeting decisions, refines biometric screening, or automates influence operations may pass a reliability test yet still degrade human agency by normalising choices that should remain human. Policies that focus on explainability, traceability, and bias mitigation are essential, but they do not by themselves guarantee that certain tasks will be reserved to persons or that persons will retain the meaningful capacity to dissent, refuse, or direct. Human dignity requires both protective ceilings and affirmative commitments: ceilings that prevent systems from performing tasks that inherently objectify or subordinate persons, and commitments that protect the deliberative space of human judgment.
Civil society and disarmament advocates have long sounded this alarm in blunt terms. Campaigns for the prohibition of fully autonomous lethal weapons rest on the premise that delegating decisions over life and death to algorithms is a categorical violation of human dignity. The UN First Committee and a coalition of NGOs have pushed for international instruments that enshrine meaningful human control and forbid designs that would systematically dehumanise targets or degrade the moral agency of those who decide to employ force. NATO’s technical and legal teams must therefore engage not only engineers but also ethicists, human-rights experts, and affected communities as part of any genuine operationalisation effort.
Operationalising dignity inside a military alliance is not a call to romanticise the battlefield or to ignore strategic realities. It is a demand for clarity about which functions can rightly be delegated to machines and which must be retained by accountable human agents. Practically, this means at least four institutional changes.
First, policies must calibrate autonomy thresholds by task, not by marketing label. NATO should adopt use-case-based redlines that reserve target acquisition and lethal engagement decisions to humans under defined conditions of risk and irreversibility, while allowing bounded autonomy for sensing, logistics, and non-lethal force protection where human oversight is demonstrably effective. These redlines should be informed by ethical impact assessments and be auditable.
Second, testing and certification regimes must incorporate dignity-sensitive metrics alongside safety and robustness. Technical testbeds and DIANA-affiliated centres should be mandated to evaluate whether a system alters a user’s deliberative capacities or leads operators to abdicate judgment to the machine as a matter of routine. Certification must cover socio-technical effects such as the erosion of refusal rights, pressure to “trust” automated outputs without scrutiny, and the creeping reassignment of moral responsibility from officers to code.
Third, governance needs institutionalised channels for legal and moral challenge. NATO’s PRUs rightly call for responsibility and accountability, but those principles must be operationalised through accessible appeal processes, legal review, and external oversight where operations affect civilians or involve experimental technologies. Transparency and traceability are prerequisites for dignity, because the capacity to contest a decision presupposes access to information about how it was made and an enforceable path to redress.
Fourth, the Alliance must build dignity-preserving clauses into procurement and partnership practices. Contracts, grants, and research agreements should require human-rights due diligence, prohibitions on certain classes of autonomy in lethal contexts, and explicit commitments to informed consent where technologies interact with personnel in bodily or cognitive ways. The BHE strategy’s informed-consent principle offers a model for how NATO can embed respect for persons into acquisition practice; similar clauses adapted for AI can prevent function creep and moral slide.
These are not mere bureaucratic preferences. Failing to make human dignity an explicit operational constraint risks systemic dehumanisation: the routine delegation of decisions away from moral agents, the normalisation of lower standards where civilians are affected, and a diffusion of responsibility that leaves no identifiable human accountable. NATO’s legitimacy depends on public trust in its moral compass as well as on its deterrent effect. If the Alliance is to lead in the responsible military use of AI, it must make dignity visible in the same way it makes reliability visible.
Finally, NATO should use its convening power to bridge soft law and hard practice. It can do this by sponsoring interoperable dignity impact assessment tools, by funding interdisciplinary research into the lived consequences of military AI, and by creating fora where engineers, lawyers, clerics, and ethicists jointly deliberate on redlines. This is not an academic nicety. It is the only way to translate abstract commitments into practices that preserve moral persons within the machinery of modern defence.
In short, NATO has accepted the need for responsible AI. The next and more difficult step is to embed human dignity as an operational constraint rather than an occasional invocation. Doing so will require institutional creativity, legal backbone, and moral courage. The alternative is to drift into a future where efficiency and tactical advantage quietly eclipse the normative commitments that justify the use of force at all. That is a risk we should not allow our democracies to run.