Few frameworks in the study of conflict are more canonical than the bargaining model of war. For decades students of international relations have relied on the intuition that conflict is, at bottom, a failure to reach or credibly enforce a bargain. The classic explanations for such failures are well known. Private information and incentives to misrepresent it can produce costly miscalculations. In addition, commitment problems can make otherwise mutually preferable settlements unattainable. These mechanisms remain useful as we ask a new question: what happens to militarized bargaining when artificially intelligent systems are no longer merely analytic aids but active components of coercive diplomacy and crisis management?
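To fix ideas, recall the standard rationalist formalization, stated here in compact form with notation of my own rather than any particular text's. Two states dispute a good normalized to the interval [0, 1]; state A prevails in war with probability p, and fighting costs the sides c_A and c_B. Any peaceful division x satisfying

\[ p - c_A \;<\; x \;<\; p + c_B \]

leaves both sides better off than war, so a non-empty bargaining range exists whenever costs are positive. War therefore requires some failure to locate or enforce a point in that range, which is precisely where private information and commitment problems do their work.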
To address this question we must separate three distinct but interacting effects. First, AI changes signal generation and verification. Second, AI compresses decision time and alters the tempo of interaction. Third, AI creates novel avenues for manipulating beliefs and information environments. Each of these effects interacts with the two canonical sources of bargaining failure in different ways, sometimes stabilizing outcomes and sometimes producing new instabilities.
Signal generation and credibility are at the heart of successful coercion. States want to make threats credible while preserving room to back down. Historically this has been achieved by tying hands, paying costs, or positioning forces in visibly costly ways. Artificial systems can serve as both new commitment devices and new sources of ambiguity. On the one hand, an AI-mediated capability that autonomously executes a policy with minimal human intervention can appear to harden a commitment. On the other hand, the opacity of many AI systems can undercut credibility. If adversaries cannot tell whether an AI system will act, or at what internal thresholds it will trigger, they may interpret signals either as more dangerous than intended or as noise. These dual possibilities mean that AI can both strengthen and weaken bargaining leverage, depending on institutional practices, transparency, and observable constraining mechanisms. Empirically, militaries are already institutionalizing responsible AI governance to preserve human judgment while accelerating AI adoption, and those choices will shape how signal credibility evolves in practice.
Tempo compression intensifies the classic commitment problem. When decision cycles are long, leaders can step back from brinkmanship, verify information, and calibrate concessions. AI systems can shorten those cycles by automating detection, assessment, and even limited response. Rapid identification of threats is valuable, but it also reduces the time available for verification and political deliberation. Where incentives to preempt exist, faster detection instruments can encourage rash action if political control is weak or institutional safeguards are poorly designed. In short, AI changes the time structure of bargaining games. Reduced decision time raises the relative value of preemption for some actors and magnifies the risks of misperception for all.
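A toy calculation makes the tempo point concrete. The sketch below is purely illustrative: the exponential review model and every number in it are assumptions chosen for exposition, not estimates of any real system.

```python
import math

def p_unverified_alert_acted_on(false_alert_rate, review_rate_per_hour, window_hours):
    """Chance that a spurious alert survives review and drives a decision,
    assuming review catches errors at an exponential rate over the window."""
    p_caught = 1.0 - math.exp(-review_rate_per_hour * window_hours)
    return false_alert_rate * (1.0 - p_caught)

# As the decision window shrinks from a day to fifteen minutes, the
# probability of acting on a false alert rises by orders of magnitude.
for window_hours in (24.0, 6.0, 1.0, 0.25):
    print(f"{window_hours:>5.2f} h -> {p_unverified_alert_acted_on(0.05, 0.5, window_hours):.4f}")
```

The point is not the particular numbers but the shape of the relationship: compressing the window for verification mechanically raises the share of decisions taken on unvetted information, which is one way faster machines can translate into worse judgments.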
AI also reshapes the information environment in ways that go beyond sensor-to-shooter chains. Generative models and automated negotiators can manipulate what adversaries and domestic audiences believe. Work on automated negotiation agents documents that algorithmic negotiators can systematically extract concessions from humans by exploiting cognitive biases or by crafting convincing but strategically deceptive offers. At scale, such tactics in statecraft would not be limited to one-on-one bargaining. They could be embedded in influence operations, social media campaigns, and other information tools that alter perceived probabilities and payoffs in a crisis. The result is that the assumption of common knowledge underlying many simple bargaining solutions becomes harder to satisfy. Adversaries may act on misperceived strengths, or conversely deny real harms by invoking the possibility of synthetic forgeries. These informational pathologies amount to a modern liar’s dividend.
The combination of these mechanisms yields several speculative but plausible dynamics. First, AI could produce brittle stability. In this scenario mutual deterrence persists because each side develops comparable autonomous protocols and mutual fear of escalation restrains direct attack. However, the system is brittle because software bugs, adversarial inputs, or miscalibrated thresholds can suddenly convert signaling into kinetic action. Second, AI could produce asymmetric escalation. States or actors that master agentic negotiation tools and information operations may obtain transient bargaining leverage, pressuring rivals to make concessions before they adapt. Third, AI could enable routinized brinkmanship. If actors substitute automated probes for human diplomatic maneuvers, crises may be prolonged or repeated at low levels until a false negative or false positive cascades into open conflict.
These risks are not matters of abstract principle alone. Leading policy research centers have catalogued how militarized AI use can influence international stability and why confidence-building measures may help. Their central recommendation is familiar in form: technological change matters, but institutions and norms will determine whether it destabilizes or stabilizes interstate relations. Concretely, transparency protocols, shared testing regimes, and technical channels for rapid clarification are already being discussed as practical mitigations. The political difficulty is that such measures require trust among rivals precisely when bargaining leverage is sought. That is the paradox.
If one accepts that AI will be integrated into coercive instruments, what governance prescriptions follow? I offer five pragmatic suggestions for states and alliances.
1) Limit agentic autonomy in bargaining-critical domains. Allow automated assistance for information fusion and recommendation. Do not allow unsupervised agentic systems to unilaterally change force postures or issue operational threats in high-stakes crises. This preserves a human node for ex ante and ex post accountability.
2) Build verifiable signaling mechanisms. These may include jointly observed constraint protocols, signed software attestations, and mutually agreed observable behaviors that reduce ambiguity about whether a capability is active. If a state claims to have constrained a system, third-party technical audits or cryptographic attestations can make that claim informative; a minimal sketch of such an attestation follows this list.
3) Invest in multi-party verification exercises. Just as nuclear confidence building once depended on data exchanges and inspections, AI-era measures will need stress tests, red-team exercises, and bilateral or multilateral simulations that reveal failure modes before crises occur.
4) Regulate information warfare that directly alters bargaining payoffs. Labeling standards, provenance tools, and rapid rebuttal channels can blunt the strategic advantage of manufactured narratives in crises. Without such measures, the information environment will systematically bias beliefs in ways that promote miscalculation.
5) Institutionalize political responsibility. The symbolic cost of backing down matters. Democracies and other actors should preserve visible human authorship of coercive moves, thus maintaining audience costs and other political constraints that make commitments credible in traditional ways.
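To make the verification idea in point 2 slightly more concrete, the sketch below shows one way a signed software attestation could work in principle: the attesting state hashes the constrained system's artifacts and signs the digest, and a counterpart or third party checks the signature against a previously exchanged public key. Everything here is a minimal illustration; a real attestation regime would also need hardware roots of trust, reproducible builds, and agreed audit procedures that a snippet cannot capture.

```python
# Minimal hash-and-sign attestation sketch (illustrative only).
# Uses the third-party 'cryptography' package; the artifact bytes are placeholders.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Attesting state: hash the constrained system's artifact and sign the digest.
signing_key = Ed25519PrivateKey.generate()
artifact = b"...model weights and configuration bytes..."  # hypothetical content
digest = hashlib.sha256(artifact).digest()
attestation = signing_key.sign(digest)

# Verifying party: recompute the digest and check it against the exchanged public key.
public_key = signing_key.public_key()
try:
    public_key.verify(attestation, hashlib.sha256(artifact).digest())
    print("attestation verifies: the artifact matches what was signed")
except InvalidSignature:
    print("attestation fails: the artifact differs from what was signed")
```

A signature of this kind attests only that the bytes have not changed; it says nothing about how the system will behave, which is why the proposal above pairs attestations with jointly observed constraint protocols and third-party audits.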
Each of these prescriptions is imperfect. They require cooperation under conditions of rivalry. They will be gamed. Yet they matter because they reintroduce the human and institutional variables that bargaining theory shows are decisive. Technological power can alter momentary incentives, but durable peace requires credible commitments that are recognized by others.
Finally, there is a deeper philosophical lesson. Bargaining is not merely an exercise in calculating best responses. It is a conversation about trust. Technologies alter what can be said and how quickly it can be said. They also change who speaks. If the art of diplomacy becomes a choreography between human statesmen and embedded algorithmic agents, we must ask what it means to assign moral responsibility for threats, promises, and mistakes. Without institutions that make humans answerable for machine-mediated coercion, we will migrate toward a world in which uncertainty about agency compounds uncertainty about intentions, and bargaining dissolves into a fog in which even sensible actors can misstep fatally.
AI will not abolish bargaining. It will transform the tools and the tempo of bargaining. The challenge for scholars and policymakers is to ensure that the transformation expands the space for peaceful settlement rather than contracting it. That task is at once technical, political, and moral. It is the sort of task that requires philosophers and engineers, diplomats and soldiers, to learn to speak the same language about risk, verification, and responsibility.