The governance of artificial intelligence now occupies a peculiar space between moral aspiration and technical contingency. On the one hand, there exist well-articulated international principles that frame AI as an object of rights, duties, and limits. On the other hand, states and firms remain entrenched in competitive dynamics that resist rapid, comprehensive legal constraint. If we ask whether binding, multilateral AI ethics treaties will exist by 2030, we must read both the normative scaffolding already in place and the political conditions that make treaties possible or impossible.
Important interpretive anchors already exist. UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, supplies a values-first articulation intended for universal uptake, tying AI governance to human dignity, human rights, and practical policy action across the AI lifecycle. This instrument is not a treaty in the technical sense, but it clarifies what a morally grounded legal instrument would need to protect and promote.
Similarly, the OECD AI Principles created a pragmatic baseline for trustworthiness and accountability that many countries and institutions use to harmonize domestic policy. These transnational soft norms are not equivalent to binding law, but they exert durable influence because they combine operational guidance with political legitimacy.
At the same time, the United Nations has moved to place governance more squarely on the international agenda. The Secretary-General launched a High-Level Advisory Body on AI in late 2023 to advise on risks, opportunities, and international governance frameworks. That institutional move signals a willingness to explore architecture options that range from enhanced coordination among existing bodies to more centralized, treaty-enabling mechanisms.
There is also precedent in specialised treaty paths. Discussions at the Convention on Certain Conventional Weapons show how states confront technological novelty through the existing disarmament architecture. Since 2017 the CCW process has been a venue for states to negotiate possible measures related to lethal autonomous weapon systems, and in 2023 the Group of Governmental Experts continued that work. The lesson is twofold. First, domain-specific, issue-limited treaties or protocols are possible. Second, the pace of such negotiations is methodical and often slow when unanimity or consensus is required.
Taken together these facts suggest three plausible scenarios for 2030.
1) Fragmented hard law. In this scenario, binding treaties exist by 2030 but they are domain specific. A protocol or treaty addressing military applications and lethal autonomous weapons might be the most likely candidate because existing disarmament fora are already handling the topic. Parallel treaties or binding agreements could also arise in specific civilian sectors where cross-border harms are acute, for example health data governance or biometric misuse. These treaties would be partial, technically narrow, and variable in ambition.
2) Convergence around hybrid governance, not a single comprehensive treaty. Here the international system leans on an assemblage of instruments: UN-led advisory bodies and compacts provide political legitimacy; OECD and similar bodies supply interoperable standards; regional regulators write binding domestic law that exerts extraterritorial effects; and industry codes and certification regimes provide operational compliance. The end state is regulatory interoperability more than a global juridical pact. The UN deliberative process launched in 2023 signals that such hybrid architectures are being actively considered.
3) Treaty stalemate. This is the pessimistic outcome. Geopolitics, differential technological capacity, and the desire to preserve strategic advantage prevent any substantive, enforceable treaty from emerging. States continue to endorse principles while preferring voluntary cooperation, norms of restraint, and confidence-building measures. A precipitating crisis could change this trajectory, but absent such a shock the equilibrium may favor nonbinding frameworks.
Which outcome is most likely? My judgement is that the second scenario is the modal outcome with elements of the first appearing in specific domains. The reasons are structural. Modern AI governance is both multi-scalar and multi-sectoral. The motive for a single, omnibus treaty is weak while the practical incentives for sectoral treaties and interoperable soft law are strong. Moreover, technical complexity favors modular responses where specialists can write rules that reflect technical realities rather than broad legal abstractions.
If the international community accepts a hybrid path, the content of future binding instruments will still reflect core ethical demands. Any credible treaty or protocol will need to do at least four things. First, anchor obligations in human rights and human oversight such that life-critical decisions preserve final human determination. UNESCO’s Recommendation explicitly foregrounds these priorities. Second, provide verifiable obligations or reporting duties so that states and actors can be held to account in an auditable way. Third, include mechanisms for technical cooperation and capacity building so poorer states are not merely objects of regulation but partners in governance. Fourth, create realistic verification and confidence-building measures modeled on arms control, export controls, or sectoral inspection regimes rather than on unverifiable grand promises. The UN advisory process has explicitly begun to explore such governance architectures and institutional analogies.
What would accelerate a treaty outcome by 2030? Major drivers would include a widely publicized catastrophic misuse of AI, a clear arms race dynamic that threatens escalation, or a decisive coalition of states, whether a regional bloc or a trade coalition, that demands binding harmonization. What would impede it? Persistent distrust among major powers, effective industry avoidance through technical workarounds, and the sheer difficulty of writing enforceable rules for systems whose capabilities evolve rapidly.
Policy implications for those who care about the ethical future of AI are immediate. First, promote plural tracks: work both in specialised disarmament fora and in multistakeholder standard-setting bodies. Second, invest in technical verification tools now so that treaty negotiators have credible instruments to inspect and audit AI systems. Third, prioritize inclusivity in rule design so that legitimacy is not sacrificed for speed. Finally, cultivate what I call reflective restraint: do not confuse the moral clarity of universal principles with the legal form those principles should take. The two are complementary but not identical.
By 2030 we will most likely possess a layered governance architecture in which binding accords exist in targeted areas, and a web of stronger soft law and regional legislation governs broader practice. That architecture will be imperfect, contested, and provisional. It will nonetheless offer a pragmatic path out of the current moral-technical limbo, where good intentions are abundant but enforceable obligations remain scarce. The proper task for scholars and practitioners is to shape those imperfect instruments so that they carry the moral freight required to protect human dignity in an age of powerful automated systems.