Epistemic authority names the capacity to settle what counts as knowledge, credible evidence, and reliable interpretation in a domain of public importance. In the context of artificial intelligence this is not a philosophical abstraction. It is a practical lever that shapes law, policy, operational practice, and, ultimately, who bears responsibility when automated systems cause harm. International law has always relied on epistemic authorities — jurists, scientific panels, and technical experts — to translate empirical fact into legal conclusions. The problem for AI is that the sources and mechanics of that authority are now diffuse, contested, and in some cases automated.

The international governance ecosystem already contains multiple, competing loci of epistemic authority. Multilateral instruments and normative soft law provide one set of reference points. UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted by its General Conference in 2021, is an example of a global standard-setting instrument that seeks to orient states toward shared ethical principles for AI. Such instruments anchor debates about human rights, transparency, and human oversight in recognized international forums.

Technical standards bodies and national technical agencies provide another. The United States National Institute of Standards and Technology published its AI Risk Management Framework in January 2023 to operationalize trustworthiness and risk management for AI systems. That document performs very different epistemic work from a values declaration: it translates risks into lifecycle processes, measurement concepts, and governance functions that organizations can implement. Where UNESCO offers normative compass points, NIST and similar bodies offer instruments for making the world legible and actionable to engineers and lawyers.
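
To make concrete what "translating risks into lifecycle processes" can look like inside an organization, the sketch below arranges example obligations under the framework's four core functions (Govern, Map, Measure, Manage). The function names come from the published framework; the example activities, the helper function, and the idea of rendering them as a checklist are illustrative assumptions, not an excerpt from the NIST text.

```python
# Minimal sketch: the four core functions of the NIST AI Risk Management
# Framework used as keys for an organizational checklist. The example
# activities are illustrative assumptions, not quotations from the framework.
RMF_FUNCTIONS = {
    "Govern": [
        "assign accountability for AI risk decisions",
        "document policies for model procurement and retirement",
    ],
    "Map": [
        "record intended context of use and affected stakeholders",
        "identify foreseeable misuse and downstream impacts",
    ],
    "Measure": [
        "define metrics for reliability, bias, and robustness",
        "log evaluation results with dataset and version provenance",
    ],
    "Manage": [
        "prioritize identified risks and document treatment decisions",
        "establish incident response and disclosure procedures",
    ],
}


def open_items(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return activities not yet marked complete, grouped by function."""
    return {
        fn: [activity for activity in activities
             if activity not in completed.get(fn, set())]
        for fn, activities in RMF_FUNCTIONS.items()
    }
```

The point of the exercise is not the code but the shift in register: a framework of this kind gives engineers and lawyers a shared, inspectable vocabulary for what was done and what was not.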

Intergovernmental policy instruments such as the OECD AI Principles function as a third register. The OECD’s Principles, and their uptake through forums like the G20 and the OECD.AI Policy Observatory, have become infrastructure that mediates between high‑level values and national or sectoral law. By periodically updating definitions and guidance, these instruments aim to retain technical relevance while maintaining a claim to represent shared state commitments.

Beyond state and standards actors there are two other decisive sources of epistemic authority in AI. One is private industry and the laboratories that develop frontier models. Companies that train massive models and control large datasets craft the empirical premises that many regulators and militaries must accept or rebut. The other is the United Nations system and its ad hoc expert assemblies. In 2023 the Secretary‑General convened a High‑Level Advisory Body on AI; its final report, Governing AI for Humanity, published in 2024, recommended creating distributed scientific capacity and an international scientific panel on AI to bridge knowledge gaps between states and technical communities. This recommendation recognizes, explicitly, that questions of epistemic authority are also questions about inclusion and legitimacy.

Why does epistemic authority matter for international law? There are three connected reasons. First, law requires facts. Doctrines from state responsibility to the law of armed conflict depend on determinations about capabilities, causal chains, and foreseeability. When those determinations are mediated by opaque models or proprietary evaluation procedures, courts and treaty bodies face an evidentiary problem: how to evaluate technical claims in adversarial or inter‑state contexts. Second, the allocation of accountability presumes knowledge that can be contested. If engineers, a private laboratory, or an algorithmic black box becomes the de facto final arbiter of how a system behaved, remedial and prosecutorial pathways are hollowed out. Third, legitimacy and equity are at stake: epistemic authority that excludes perspectives from the Global South, civil society, or affected communities will embed epistemic injustice into international norms and their enforcement. These are not hypothetical risks. The UN advisory process expressly described the concentration of AI power and knowledge as a structural problem for global governance.

The field of climate governance offers a sobering analogue. The Intergovernmental Panel on Climate Change consolidated scientific knowledge in a way that made international law and diplomacy tractable, but it also confronted critiques about representation, epistemic hierarchy, and the limits of consensus science. Lessons from that experience are instructive for AI: a single authoritative panel can simplify policy, but a single panel can also ossify particular epistemic frames and marginalize other knowledge systems. A plural, reflexive architecture is difficult to design but necessary.

Military applications sharpen these problems. The International Committee of the Red Cross has urged states to adopt legally binding limits on autonomous weapon systems and emphasized human supervision and predictability as legal and ethical prerequisites. When weapons systems embed machine judgments into targeting or engagement loops, the epistemic claims made by system designers about discrimination, reliability, and situational understanding are precisely the claims that will determine compliance with the law of armed conflict. If international adjudicators are forced to accept unverifiable technical claims, accountability suffers.

What practical reforms would better align epistemic authority with principles of legitimacy, accountability, and pluralism? First, institutionalize contestability. Standards and assessment processes must be auditable by independent parties with rights of cross‑examination in regulatory and judicial venues. Second, invest in distributed scientific capacity. The UN advisory report’s proposal for an international scientific panel and capacity development networks is a step toward reducing knowledge asymmetries between states and firms and toward making expert advice less monopolized. Third, build interoperable knowledge infrastructures. A “standards exchange” model that maps definitions, test suites, and incident data would help translate between national frameworks, technical claims, and legal criteria; a minimal sketch of what one such record might contain follows this paragraph. Fourth, require forms of procedural transparency that are meaningful. Mere publication of model cards or high‑level principles is insufficient. Transparency must enable reconstruction of key claims about reliability, training data provenance, and testing regimes in ways that courts and treaty bodies can evaluate. These recommendations mirror the hybrid technical and normative agenda already pursued by bodies such as the OECD and national standards agencies, but they intentionally bind that work to principles of international legitimacy.
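
As a purely hypothetical rendering of the “standards exchange” idea, the sketch below shows one possible record linking a technical claim about a system to the test evidence behind it and the legal criteria it is offered to satisfy. The schema, every field name, and the example values are assumptions introduced for illustration; no such shared schema exists in the instruments discussed here.

```python
from dataclasses import dataclass, field

# Hypothetical "standards exchange" record: the structure and field names
# are illustrative assumptions, not an existing standard.

@dataclass
class TestEvidence:
    test_suite: str   # name of the benchmark or audit protocol that was run
    version: str      # revision of the suite, so results can be reproduced
    result_uri: str   # where the full, auditable results are archived
    run_date: str     # ISO 8601 date of the evaluation


@dataclass
class ExchangeRecord:
    claim: str                 # the technical claim being asserted
    system_id: str             # identifier of the model or system
    definitions_source: str    # which framework's definitions the claim uses
    legal_criteria: list[str]  # legal requirements the claim is offered to meet
    evidence: list[TestEvidence] = field(default_factory=list)
    incident_reports: list[str] = field(default_factory=list)  # links to related incident data


# Example record (all values hypothetical): something a court or regulator
# could interrogate rather than accept on trust.
record = ExchangeRecord(
    claim="Discrimination error rate below the declared threshold in cluttered scenes",
    system_id="example-system-001",
    definitions_source="hypothetical national AI framework",
    legal_criteria=["principle of distinction under the law of armed conflict"],
    evidence=[TestEvidence("hypothetical-discrimination-suite", "1.2",
                           "https://example.org/results/001", "2024-06-01")],
)
```

The design choice worth noticing is that the record binds a claim to reproducible evidence and to the legal criterion it is meant to satisfy, which is exactly the translation work the paragraph above asks interoperable knowledge infrastructures to perform.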

Finally, any institutional design must accept an uncomfortable truth. Epistemic authority is not neutral. Whoever controls the instruments of measurement, testing, and data curation gains agenda‑setting power. International law can attenuate the risks of captured epistemic authority, but only through plural, contestable, and well‑resourced knowledge institutions that sit alongside legal instruments. The policy proposals emerging from the UN advisory process and from established standards organizations point in that direction. If international law aspires to regulate AI rather than to borrow its premises from proprietary laboratories, it must reclaim the epistemic terrain on which lawful, moral, and strategic judgments are made. The alternative is a future in which legality is decided by models more than by judges, and in which doubt about facts becomes a permanent advantage for the powerful.