On 19 December 2024 the United Nations Security Council held a formal debate on artificial intelligence and international peace and security. The spectacle was not merely diplomatic choreography. It was a late-stage recognition that AI has migrated from policy novelty to active variable in conflict dynamics. Secretary-General António Guterres framed the moment as urgent, warning that AI is outpacing governance and urging the rapid creation of international guardrails, an international scientific panel on AI, and a global dialogue on governance. He also reiterated his call to prohibit lethal autonomous weapon systems and argued that applications that remove meaningful human control over decisions of life and death must be banned.
The meeting exposed both consensus and fracture. Many delegations converged on the proposition that AI can be a force for early warning, humanitarian planning, and even demining. At the same time, a number of States raised familiar but consequential objections. Some warned that governance frameworks must not become instruments of technological hegemony; others insisted that urgent restrictions are required to prevent an AI arms race and the erosion of human control in targeting. The United States, speaking as Council President for December, emphasized the duality of AI as both opportunity and threat and described nascent public-private responses such as watermarking of synthetic content and international research coordination. Russia and a few others cautioned against Western-led rule-making. These contours were plainly visible in the Council record.
This debate must be read against parallel diplomatic activity elsewhere in the UN system. The General Assembly First Committee approved a resolution in November 2024 that called for bridging divides over responsible military uses of AI and encouraged cooperation that includes assistance to developing countries. That vote illustrated broad concern across the membership about the security, humanitarian and proliferation implications of AI in the military domain.
Why does the Security Council matter in this conversation? The Council is charged with maintaining international peace and security. When new technologies change the character of escalation, deterrence, command and control, or attribution, those effects inevitably become Council business. The inclusion of AI in a Council debate is therefore normative as well as procedural. It signals that AI-enabled capabilities are no longer purely technological curiosities; they are strategic instruments that can alter the balance of risk between States and non-state actors alike. Guterres was blunt about one specific fear: the integration of AI with nuclear systems. That nexus raises existential risks that require immediate political attention.
Yet the Council debate also revealed the limits of multilateral progress. Political fragmentation will be the principal obstacle to any robust regime. The Security Council itself is not immune to great power competition. When regulation implicates national security strategies, export controls, or indigenous industrial capacity, unanimity is unlikely. The result is predictable: declaratory statements, calls for multistakeholder dialogues, and voluntary norms rather than binding limits. Meanwhile, developers and militaries will continue to iterate quickly because battlefield utility and procurement cycles reward operational advantage. The phenomenon is already visible in recent reporting that autonomous and AI-augmented systems are being fielded and experimented with in active conflicts. The technology is no longer entirely hypothetical.
That combination of rapid deployment, attribution difficulties, and political stalemate creates three practical problems that any responsible governance architecture must handle. First, verification and attribution. AI is software-heavy, dual-use, and often opaque. Third-party verification regimes that work for conventional arms do not translate easily. Second, diffusion and proliferation. Lower entry costs for some AI-enabled capabilities mean non-state actors could obtain or improvise dangerous systems faster than treaty drafters can respond. Third, legal and moral accountability. International humanitarian law remains the framework for judging conduct in warfare, but AI systems complicate determinations of intent, foreseeability, and responsibility when harm results from automated processes.
From a policy perspective there are, in my view, three complementary paths the international community should pursue now. They are modest in ambition but practical and politically defensible.
1) Immediate, narrow bans where the risk calculus is unambiguous. An example is the explicit prohibition of AI automation within nuclear command and control architectures. This is not speculative. The convergence of AI and nuclear systems was singled out by the Secretary-General as a risk that must be avoided. A short, focused prohibition is more likely to gain traction than attempts at a sweeping prohibition across all military applications of AI.
2) A set of operational norms to preserve meaningful human control over use of force. These can be framed as obligations that require a human in the loop for target selection, enforce testing and certification standards, mandate auditable logs for algorithmic decisions, and require adversary and civilian risk assessments before deployment. Such norms could be operationalized by a UN technical advisory mechanism that assists with verification and capacity building. This approach recognizes the dual-use reality of many AI tools and seeks to constrain their most dangerous military employment while permitting beneficial uses like humanitarian assistance and demining.
3) Investment in inclusive institutional capacity. The Secretary-General and the First Committee both emphasized the need to support developing countries so they are not mere spectators in the rule-making process. That is essential for legitimacy and for reducing the incentive to defect from norms. Practical steps include technical assistance, shared testbeds for safety evaluation, and an international scientific panel that synthesizes evidence on operational risk. These mechanisms should prioritize transparency, objective benchmarks for system reliability, and channels for civil society and technical communities to contribute.
There are moral dimensions that pure policy analysis cannot displace. Delegating decisions about lethal force to systems that lack moral comprehension is not simply a technical risk. It is an abdication of ethical responsibility. Democracies that value the rule of law, judgment, and accountability have a normative interest in ensuring that human beings remain the final moral agents in matters of life and death. If we allow technological expedience to reassign that role, we will not only change how wars are fought, we will change what it means to be responsible actors under international law.
Finally, realism must temper idealism. The Security Council debate was a necessary and commendable step. It is not, however, a substitute for concrete instruments, testing protocols, and verification modalities. Aspirational proclamations must be matched by technically informed, politically feasible measures that shore up human control, close obvious catastrophic pathways such as any AI integration with nuclear command structures, and build shared institutions for testing and assistance. If the Council is to do more than register anxieties it will need to translate discourse into durable mechanisms that can operate despite geopolitical rivalry.
History suggests that technology will not wait for complete consensus. The wiser course is to set clear, defensible boundaries now while investing in institutions that can evolve with the technology. The 19 December debate made that choice visible. The difficult work now is to make rhetoric binding in practice, and to ensure that human judgment remains central to decisions of war and peace.