We live in an uneasy paradox. Modern military planners and technologists celebrate the promise of AI to reduce cognitive load, accelerate the sensing-to-effects chain, and lower risk to human life. At the same time, frontline personnel may refuse to accept machine recommendations after seeing those recommendations err even once. That behavioural phenomenon, variously called algorithm aversion or AI denial, is not mere contrarianism. It is a psychologically grounded response with operational consequences, and it deserves careful theoretical and design attention.
Behavioural science shows a robust bias: humans punish algorithmic error more severely than comparable human error. In experiments where participants watched an algorithm and a human forecaster make identical mistakes, observers lost confidence in the algorithm faster and were more likely to abandon it despite its superior aggregate performance. This is the core of algorithm aversion, and it poses a direct challenge to adopting AI in high-stakes settings such as combat, where a single visible mistake can collapse trust in otherwise valuable tools.
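The asymmetry can be made concrete with a deliberately simple toy simulation. This is a sketch only: the error rate, penalty sizes, recovery rate, and starting trust are hypothetical values chosen for illustration, not estimates from the experimental literature. Both advisers err on exactly the same trials (same random seed), yet trust in the algorithmic adviser collapses because its errors are punished more heavily.

```python
import random

# Illustrative toy model only. All parameters below are hypothetical.
ERROR_RATE = 0.1       # both advisers err on 10% of trials
HUMAN_PENALTY = 0.05   # trust lost when the human adviser errs
ALGO_PENALTY = 0.20    # larger trust loss for an identical algorithmic error
RECOVERY = 0.01        # small trust recovery after a correct recommendation

def run_trials(penalty, trials=200, seed=0):
    """Return the trust trajectory for one adviser under asymmetric updating."""
    rng = random.Random(seed)  # shared seed: both advisers err on the same trials
    trust = 0.8
    history = []
    for _ in range(trials):
        erred = rng.random() < ERROR_RATE
        trust = max(0.0, trust - penalty) if erred else min(1.0, trust + RECOVERY)
        history.append(trust)
    return history

human_trust = run_trials(HUMAN_PENALTY)
algo_trust = run_trials(ALGO_PENALTY)
print(f"final trust in human adviser:       {human_trust[-1]:.2f}")
print(f"final trust in algorithmic adviser: {algo_trust[-1]:.2f}")
```

Run under these assumptions, trust in the human adviser stays near its ceiling while trust in the algorithm drifts toward zero, even though the two make exactly the same mistakes.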
The national security context intensifies these dynamics. Recent research on automation bias and algorithm aversion in international security tasks finds a nonlinear relationship between prior exposure to AI and the probability of either overreliance or rejection. Individuals with little AI experience are more likely to be algorithm-averse, while those with moderate exposure may be prone to automation bias, that is, to overtrust. Institutional experience, training, and the specific task therefore all shape whether personnel will deny or over-depend on AI advice. Nor is the effect merely individual: doctrine, procedures, and command culture embed incentives that either amplify or dampen these biases.
Why should militaries care about deliberate or emergent AI denial? The answer is pragmatic. Underuse of capable decision aids can slow reaction times, increase cognitive burden, and produce predictable errors. Overuse produces different risks: automation bias can allow false positives to propagate unchecked, and institutionalized faith in a system can generate systemic failures. Both outcomes degrade resilience. Psychological resilience in combat is not the same as stubborn rejection of tools; it is the capacity to use tools judiciously, to recognize their limits, and to recover from their failures. Achieving that capacity requires design work, doctrine, and training that treat human judgement and AI outputs as complementary components of a socio-technical system.
Empirical work points to practical mitigations. First, transparency and intelligibility matter. Exposing operators to how a model reasons and to appropriate uncertainty estimates reduces the perception that AI mistakes are inscrutable. Second, demonstrating that an algorithm can learn and improve over time diminishes aversion. Experiments show that users are more willing to accept algorithmic assistance when the system can be seen to correct itself and adapt. Third, dynamic transparency that adjusts the amount of explanation provided based on operator workload and expertise helps calibrate trust in the moment. These interventions are not panaceas, but they form a toolbox for reducing unwarranted denial while guarding against automation bias.
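One way to read "dynamic transparency" in engineering terms is as a policy that maps operator state to an explanation level. The sketch below is an assumption-laden illustration: the level names, the normalized workload and expertise inputs, and the 0.7 and 0.3 thresholds are hypothetical placeholders that a fielded system would have to calibrate empirically against operator performance.

```python
from enum import Enum

class ExplanationLevel(Enum):
    """How much of the model's reasoning and uncertainty to surface."""
    MINIMAL = 1    # recommendation plus a single confidence figure
    STANDARD = 2   # adds top contributing factors and uncertainty bounds
    DETAILED = 3   # adds provenance of inputs and known failure modes

def select_explanation(workload: float, expertise: float) -> ExplanationLevel:
    """Pick an explanation level from operator state.

    workload and expertise are assumed to be normalized to [0, 1];
    the thresholds are hypothetical and would need empirical calibration.
    """
    if workload > 0.7:
        # Under high workload, a detailed explanation becomes a distraction.
        return ExplanationLevel.MINIMAL
    if expertise < 0.3:
        # Less experienced operators get fuller context to counter
        # both aversion and overtrust.
        return ExplanationLevel.DETAILED
    return ExplanationLevel.STANDARD

# Example: a highly loaded expert operator gets the terse presentation.
level = select_explanation(workload=0.8, expertise=0.9)   # -> ExplanationLevel.MINIMAL
```

The design choice worth noting is that the policy is explicit and inspectable, so the same behavioural research used to justify the thresholds can also be used to audit them.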
Design and institutional recommendations follow from the preceding principles. Engineers should build interfaces that make error modes visible and that present confidence and provenance alongside recommendations. Training should combine simulated exposure to both correct and incorrect AI outputs with debriefs that explain why the system erred and how it will change. Doctrine should formalize when human override is required and when AI advice should be privileged, and it should require after-action logging so that decisions, and the rationale for denial or acceptance, are auditable. Finally, procurement must treat human factors and behavioural research as mission critical, not as optional extras to be tacked on after deployment. The literature on automation bias and algorithm aversion makes clear that technical performance alone will not determine operational impact.
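What those interface and audit-trail recommendations might look like as data structures is sketched below. The field names (confidence, provenance, known_failure_modes, rationale) are illustrative assumptions rather than a description of any fielded system; the point is that acceptance and denial are both captured with a stated reason, so that justified overrides can later be reviewed on their merits.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Recommendation:
    """An AI recommendation presented with its confidence and provenance."""
    summary: str
    confidence: float              # model-reported confidence in [0, 1]
    provenance: List[str]          # data sources and model version behind the output
    known_failure_modes: List[str] # conditions under which the model is known to err

@dataclass
class DecisionRecord:
    """After-action log entry: what was recommended, what the operator did, and why."""
    recommendation: Recommendation
    accepted: bool
    rationale: str                 # operator's stated reason for acceptance or denial
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: List[DecisionRecord] = []

def record_decision(rec: Recommendation, accepted: bool, rationale: str) -> None:
    """Append an auditable record so overrides and acceptances can be reviewed."""
    audit_log.append(DecisionRecord(rec, accepted, rationale))

# Hypothetical usage: a denial is logged together with the operator's reasoning.
rec = Recommendation(
    summary="Probable hostile track, bearing 042",
    confidence=0.73,
    provenance=["sensor feed R-2", "model v1.4"],
    known_failure_modes=["clutter near coastline"],
)
record_decision(rec, accepted=False, rationale="Visual identification contradicts the track classification.")
```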
There is an ethical and philosophical point that must not be elided. Psychological resilience should permit justified denial of automated recommendations. Sometimes rejection is courageously right. The problem arises when denial is irrational or when it is socially contagious and undercuts collective performance. The aim is therefore a calibrated scepticism that preserves moral agency and responsibility while allowing the best judgments to emerge from human-AI teams. That balance requires humility from technologists and rigor from commanders. It also requires that accountability structures reward justified override and penalize negligent dismissal.
In short, AI denial in combat is not a simple error to be stamped out. It is an expression of a deeper set of human responses to machine fallibility and institutional incentives. Effective resilience is not the absence of doubt. It is the ability to doubt well, to design systems that earn and deserve trust, and to create operational cultures that treat denial as a legitimate tactic when warranted and as a risk when reflexive. If military organizations commit to multidisciplinary design, rigorous training, and doctrinal clarity, they can turn the paradox of denial into an asset rather than a liability.