We must begin by naming the phenomenon. AI command erosion describes the gradual weakening of a commander’s cognitive, moral, and procedural authority when automated systems are given decisive roles in sensing, targeting, and force employment. It is not a single failure mode but a syndrome: skill atrophy, cognitive offloading, automation bias, moral distancing, and institutional diffusion of responsibility accumulate until the human in the loop becomes a ceremonial presence rather than an accountable agent. This is a human problem as much as a technical one, and it must be addressed as such.

Psychology supplies the mechanisms. Automation bias and complacency lead users to accept machine output without appropriate scrutiny. Under time pressure or information overload, operators default to automated recommendations; when those recommendations are usually correct, deference hardens into a brittle habit. Studies across domains from cybersecurity operations to clinical diagnostics demonstrate that automation both improves mean performance and increases the probability that humans will follow incorrect algorithmic advice when it appears authoritative. The military is not an exception; the very features that make AI attractive - speed, pattern recognition, and continuous attention - also encourage cognitive surrender.
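To see why that combination is insidious, consider a toy simulation. The sketch below is purely illustrative, with invented parameters rather than figures drawn from any cited study: an aid that is right 95 percent of the time lifts mean accuracy well above an unaided baseline, yet an operator who always defers acts on every one of the aid's errors, while an operator who scrutinizes recovers a fraction of them.

```python
# Toy model of automation bias (illustrative only; every parameter is invented).
# Compare three operators over many trials: unaided, deferential (always accepts
# the aid's recommendation), and scrutinizing (independently checks, catching some
# aid errors at the cost of occasionally overriding a correct recommendation).
import random

random.seed(0)
TRIALS = 100_000
UNAIDED_ACCURACY = 0.80   # assumed skill of an operator working without the aid
AID_ACCURACY = 0.95       # assumed accuracy of the automated recommendation
CATCH_RATE = 0.60         # assumed chance scrutiny spots an incorrect recommendation
SLIP_RATE = 0.02          # assumed chance scrutiny wrongly overrides a correct one

unaided = deferential = scrutinizing = 0
aid_errors = errors_followed = 0

for _ in range(TRIALS):
    aid_right = random.random() < AID_ACCURACY
    unaided += random.random() < UNAIDED_ACCURACY
    if aid_right:
        deferential += 1
        scrutinizing += random.random() >= SLIP_RATE   # usually keeps the correct call
    else:
        aid_errors += 1
        errors_followed += 1                           # deference follows every error
        scrutinizing += random.random() < CATCH_RATE   # scrutiny catches some errors

print(f"unaided accuracy:      {unaided / TRIALS:.3f}")
print(f"deferential accuracy:  {deferential / TRIALS:.3f}")
print(f"scrutinizing accuracy: {scrutinizing / TRIALS:.3f}")
print(f"aid errors followed when deferring: {errors_followed / aid_errors:.0%}")
```

Under these assumed parameters, deferential accuracy rises from roughly 0.80 to 0.95 while 100 percent of the aid's errors are acted on; scrutiny sacrifices a little speed but breaks that coupling. Better averages and worse worst cases can coexist, which is exactly what the empirical literature reports.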

The moral dimension follows the cognitive one. Philosophers and legal scholars have long warned of accountability gaps when machines mediate lethal decisions. When decisions are traceable only to opaque models or to layered organizational workflows, the chain of moral responsibility becomes blurred. Operators may feel that they did not really decide, engineers may claim they only provided tools, and commanders may point to policy or algorithmic authority. The result is a moral buffer that reduces feelings of culpability and weakens the internal constraints that historically restrained the use of force. This is not speculative. Frameworks for “meaningful human control” were developed precisely because socio-technical shifts of this kind erode an actor’s ability to explain and justify choices.

Command culture transforms under prolonged automation. Professional judgment is learned in practice, through repeated exposure to uncertainty, mistakes, and corrective feedback. If those learning loops are short-circuited - if AI systems resolve ambiguities for trainees and never expose them to edge cases - then tacit skills and ethical reflexes atrophy. Militaries that place too much training emphasis on system supervision instead of decisioncraft risk producing commanders who are technically literate but morally and tactically shallow. Recent doctrinal and PME discussions have already flagged this danger, noting that AI-enhanced professional military education can inadvertently hollow out intuitive, experience-based decision-making unless curricula deliberately preserve messy human judgment.

Design choices interact with psychology in surprising ways. Transparency and explanations seem like obvious mitigations, but empirical work shows that explanations can sometimes increase overreliance rather than foster productive skepticism. Explanations that are shallow, post hoc, or overly persuasive can reassure users even when the system is wrong. Thus human-centered interface design is necessary but not sufficient - the entire socio-technical ecology around the tool must be structured to sustain active scrutiny and to punish mechanical deference.

Institutional dynamics amplify individual tendencies. Organizations reward speed, measurable outcomes, and casualty avoidance. Automated systems that demonstrably reduce friendly losses or increase observable effects will be rewarded in promotion and procurement cycles. Over time, success metrics and career incentives normalize delegation. The paradox is that what begins as rational risk reduction becomes a structural pressure toward ever more delegation, increasing systemic vulnerability and thinning moral responsibility across the chain of command. Accounts of automation risk in national security point to historical incidents where delegation created brittle failure modes; those cases should be read as cautionary precedents rather than anomalies.

What to do about it - practical mitigations. First, bake “active competence” into doctrine and training. That means repeated, high-fidelity practice of degraded-mode decision-making where AI cues are unavailable or deliberately misleading. Simulations must not be convenience runs that merely showcase the automation; they must be pedagogical trials that hone judgment under uncertainty. Second, institutionalize decision windows and veto authorities that are procedurally meaningful - not symbolic. Vetoes must require written justification and after-action review, not checkbox endorsements. Third, design interfaces that promote contestability rather than quiet reassurance: calibrated uncertainty estimates, adversarial challenge modes, and forced counterfactual tasks that make operators generate alternative hypotheses. Fourth, make auditability and traceability non-negotiable. If a machine recommendation cannot be reconstructed and explained in human terms within a reasonable forensic horizon, it should not be trusted with lethal outcomes. Finally, reform incentives so that promotion and procurement reward demonstrated command judgment under ambiguity, not mere throughput or system uptime.
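To make the second and fourth points concrete, here is a minimal sketch of what an auditable decision record might look like: it binds the model version, a digest of the inputs, the system’s stated uncertainty, and the human decision into one content-hashed record, and it refuses to accept an approval or a veto without a substantive written justification. The schema, the field names, and the 40-character threshold are hypothetical illustrations, not features of any fielded system or cited framework.

```python
# Minimal sketch of an auditable decision record (hypothetical schema, not a
# fielded system). The point is procedural: a recommendation is actionable only
# if it can later be reconstructed, and no approval or veto is accepted without
# a written justification tied to an identifiable commander.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class Recommendation:
    model_id: str              # which model and version produced the recommendation
    inputs_digest: str         # hash of the sensor/intel inputs, for later reconstruction
    proposed_action: str
    stated_uncertainty: float  # calibrated confidence reported by the system


@dataclass
class DecisionRecord:
    recommendation: Recommendation
    commander_id: str
    decision: str       # "approve" or "veto"
    justification: str  # free-text rationale, mandatory
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self) -> None:
        if self.decision not in ("approve", "veto"):
            raise ValueError("decision must be 'approve' or 'veto'")
        if len(self.justification.strip()) < 40:  # assumed threshold against checkbox endorsements
            raise ValueError("a substantive written justification is required")

    def digest(self) -> str:
        """Content hash so records can be chained into a tamper-evident audit log."""
        payload = json.dumps(self.__dict__, default=lambda o: o.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


# Usage: a veto is accepted only with a substantive written rationale; an empty
# or one-word justification raises before anything is logged.
rec = Recommendation("targeting-model/v2.3", "sha256:example-input-digest",
                     "engage contact 417", 0.82)
record = DecisionRecord(rec, "cdr.example", "veto",
                        "Pattern of life inconsistent with hostile intent; collateral risk too high.")
print(record.decision, record.digest()[:16])
```

The design choice worth noting is that the justification requirement is enforced at the moment the record is created, so checkbox endorsements are structurally impossible rather than merely discouraged, and the input digest gives after-action reviewers something to reconstruct within the forensic horizon the paragraph above demands.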

A final, philosophical admonition. Command is an ethical practice, not a software API. Machines can extend perception, compress decision cycles, and prevent predictable mistakes. They cannot experience guilt, remember a subordinate who died because of a mistaken assumption, or answer to a tribunal for a bad choice. Erosion of command is therefore not just an operational risk - it is an erosion of moral agency within the polity that fields force. If we are to keep humans in control in any meaningful sense, we must design institutions and technologies that preserve human judgment as a cultivated competence, not as an optional layer to be switched on when convenience fails. The stakes are both pragmatic and moral: the legitimacy of force depends on it.