There is a subtle violence in an assertion of knowledge. When a human speaks and is heard, epistemic norms and social histories mediate whether that testimony is granted weight. When a machine speaks in the same register—presenting assessments, classifications, or confidence scores—those social histories do not simply vanish. They are transplanted into new architectures of authority whose moral and psychological consequences are only beginning to be understood.

Philosophers of knowledge have usefully separated two related harms that attend the politics of credibility: testimonial injustice, where a speaker is wronged by being given less credibility because of prejudice, and hermeneutical injustice, where a social group lacks the interpretive resources to make sense of its experience. These concepts are not ivory tower curiosities. They are diagnostic tools for reading how automated systems reconfigure who counts as a knower and what counts as intelligible evidence. In short, the categories of epistemic injustice map directly onto the psychology of human trust in, and deference to, machines. [1]

We should stop pretending that an autonomous sensor suite is merely technical. Interfaces, labels, warning lights, and confidence bars are rhetorical devices. They make an epistemic claim. They announce, nonverbally, that the system has seen, inferred, or decided something worthy of belief. In many operational contexts, that announcement is accepted without interrogation. This phenomenon is not new to the human factors literature. Work on trust in automation shows that humans form attitudes of trust toward automated aids, and those attitudes then guide reliance; if trust is miscalibrated, operators either underuse or overuse those aids, with predictable costs. The psychological mechanism is straightforward: when faced with complexity or time pressure, the human tendency is to offload epistemic labor to the apparent expert. Automation provides an efficient shortcut, and that shortcut can become an epistemic trap. [2][3]

In military settings the trap is amplified. High-tempo, high-risk environments reward rapid acquiescence to apparent expertise. An autonomous classifier that presents a target as hostile with 92 percent confidence does more than report a number; it projects authority. Crew dynamics, organizational incentives, and doctrinal cultures further mediate how readily that projection converts into action. When machines assert epistemic authority, they do not simply inform decisions. They reconfigure the moral ecology of responsibility: who counts as the epistemic source, and who is left to contest, verify, or authorize that judgment.

This projection of authority has a structural dimension. Scholars of algorithmic authority have shown how algorithms acquire social power not by metaphysical right but by embedding themselves into practices that make their outputs consequential. Once a system is stitched into supply chains, rules of engagement, or the cognitive workflows of operators, its declarations acquire de facto epistemic status. That status is socially produced and institutionally stabilized. It can therefore displace alternative knowledges, silence dissenting sensors, and occlude the possibility of contestation. [4]

A second psychological vector to consider is automation bias. Experimental and field studies across domains from aviation to medicine demonstrate two error modes: omission errors, where operators fail to notice or respond to a problem because the automated aid did not flag it; and commission errors, where operators follow an incorrect automated recommendation despite available contrary evidence. The cognitive economy of attention and the heuristics of authority are central to both. In the crucible of combat, omission errors can be lethal and commission errors politically catastrophic. Psychological research warns that the mere presence of high-performing automation induces a positivity bias: people will assume the system is more capable than it is unless processes actively preserve skepticism. [3][5]
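
To keep the two error modes distinct in after-action analysis, it can help to tag each logged interaction explicitly. The sketch below is illustrative only: the `Interaction` fields and the labelling rules are assumptions about what such a log might record, not a validated coding scheme.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged operator-automation interaction (field names are illustrative)."""
    automation_flagged: bool           # did the aid raise an alert or recommendation?
    automation_correct: bool           # was the aid's output correct in hindsight?
    operator_acted: bool               # did the operator act on the situation?
    contrary_evidence_available: bool  # was disconfirming evidence accessible at the time?

def classify_error(event: Interaction) -> str:
    """Label an interaction as an omission error, a commission error, or neither."""
    # Omission: a real problem went unaddressed because the aid stayed silent
    # and the operator deferred to that silence.
    if not event.automation_flagged and not event.automation_correct and not event.operator_acted:
        return "omission"
    # Commission: the operator followed an incorrect recommendation even though
    # contrary evidence was within reach.
    if (event.automation_flagged and not event.automation_correct
            and event.operator_acted and event.contrary_evidence_available):
        return "commission"
    return "none"

def error_profile(log: list[Interaction]) -> Counter:
    """Tally both error modes across an after-action log."""
    return Counter(classify_error(e) for e in log)
```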

The epistemic costs are not evenly distributed. Recent work on AI and language technologies has shown that systems trained on dominant data ecologies tend to carry techno-linguistic biases, rendering some expressions and worldviews less visible or legible to automated evaluators. When deployed in security or intelligence roles, such erasures manifest as hermeneutical harms: whole communities or modes of reporting can be systematically marginalized because the machine lacks the conceptual frameworks to represent them accurately. The effect is to narrow the range of intelligible testimony that reaches decision-makers, which in turn concentrates epistemic authority in the system and in the institutions that deploy it. [6]

If we accept these diagnoses, the design imperative becomes clear. First, we must design for contestability. Outputs from autonomous systems must be accompanied by affordances that make them inspectable, challengeable, and reversible at operational tempos. Confidence numbers alone are insufficient; provenance, counterfactual traces, and easily available alternative data views are necessary to sustain human epistemic agency.
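
One way to make this concrete is to treat a machine output as a record that carries its own grounds for challenge rather than as a bare score. The sketch below assumes hypothetical field names (`provenance`, `counterfactuals`, `alternatives`) that stand in for the affordances named above; it is an illustration, not a reference schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContestableAssessment:
    """A machine output packaged with the affordances that keep it challengeable."""
    label: str                 # e.g. "hostile"
    confidence: float          # the bare number, deliberately not the whole story
    provenance: list[str]      # sensors, models, and data versions behind the call
    counterfactuals: list[str] # what observations would have changed the label
    alternatives: list[str]    # competing hypotheses the operator can inspect
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    contested: bool = False
    contest_reason: str | None = None

    def contest(self, reason: str) -> None:
        """Record a challenge; the assessment reverts to provisional status."""
        self.contested = True
        self.contest_reason = reason
```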

Second, training and doctrine must explicitly account for epistemic heuristics. Trust is not a private psychological quirk; it is a managed organizational variable. Doctrine that assumes default deference to machine outputs will institutionalize epistemic authority claims and thereby increase the risk of malpractice when systems err. Conversely, structures that require periodic verification, red-team evaluation, and accountability for reliance decisions help calibrate trust to actual system capabilities. [2][3]
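
One rough way to treat trust as a managed variable rather than a private quirk is to compare observed reliance with measured performance over a review window and flag the gap. The sketch below assumes paired after-action records and a placeholder tolerance; both are illustrative, not doctrine.

```python
def calibration_gap(followed: list[bool], correct: list[bool]) -> float:
    """Difference between observed reliance rate and measured system accuracy.

    followed[i] -- the operator accepted the aid's recommendation on case i
    correct[i]  -- the aid's recommendation on case i was correct in hindsight
    A positive gap suggests over-reliance; a negative gap suggests under-reliance.
    """
    if not followed or len(followed) != len(correct):
        raise ValueError("need paired, non-empty records")
    reliance_rate = sum(followed) / len(followed)
    accuracy = sum(correct) / len(correct)
    return reliance_rate - accuracy

TOLERANCE = 0.10  # placeholder value; the real threshold is a doctrinal choice

def needs_review(followed: list[bool], correct: list[bool]) -> bool:
    """Escalate for review when reliance and capability have drifted apart."""
    return abs(calibration_gap(followed, correct)) > TOLERANCE
```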

Third, the architecture of human-machine teaming should preserve meaningful human control through variable autonomy. The psychology of trust suggests that operators will accept machine authority when they feel it reduces cognitive load without removing responsibility. Variable autonomy models, when coupled to clear accountability lines and explainable decision paths, allow humans to retain epistemic primacy while benefiting from machine-scale perception. Designing teams that rotate authority, that make machine inferences provisional rather than definitive, and that institutionalize the duty to contest when stakes are high will blunt the worst effects of automation-induced epistemic capture. [5]
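
A minimal sketch of such a gate, assuming that machine inferences are provisional by default and that stakes determine where authority sits; the `AuthorityLevel` names and the stakes categories are invented for illustration.

```python
from enum import Enum, auto

class AuthorityLevel(Enum):
    """Who holds decision authority for a given inference (names are invented)."""
    MACHINE_EXECUTES = auto()  # machine may act; a human monitors
    HUMAN_CONFIRMS = auto()    # machine proposes; a human must approve
    HUMAN_DECIDES = auto()     # machine only reports evidence; the decision is human

def required_authority(stakes: str, contested: bool) -> AuthorityLevel:
    """Route a provisional machine inference to the appropriate decision mode."""
    if stakes == "high" or contested:
        return AuthorityLevel.HUMAN_DECIDES
    if stakes == "medium":
        return AuthorityLevel.HUMAN_CONFIRMS
    return AuthorityLevel.MACHINE_EXECUTES

def may_proceed(stakes: str, contested: bool, human_approval: bool | None = None) -> bool:
    """Return True only when the applicable authority level is satisfied."""
    level = required_authority(stakes, contested)
    if level is AuthorityLevel.MACHINE_EXECUTES:
        return True
    if level is AuthorityLevel.HUMAN_CONFIRMS:
        return bool(human_approval)
    return False  # HUMAN_DECIDES: the system never acts on its own inference
```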

Finally, we must insist on epistemic pluralism in training data and evaluation regimes. If automated systems are to be trusted as epistemic instruments in plural societies, they must be audited for the hermeneutical blind spots that produce epistemic injustice. Audit protocols should not be tokenistic. They must interrogate which voices the system recognizes, which it silences, and which it systematically misrepresents. Without such interrogations, the machine will appear to know, but its knowledge will be partial and partisan. [6]
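
The audit this implies has a simple quantitative core: disaggregate performance by community, language variety, or reporting mode, and surface the groups the system serves worst. The record keys and the tolerance below are assumptions for illustration; deciding what the gaps mean remains interpretive work that cannot be automated away.

```python
from collections import defaultdict

def disaggregated_error_rates(records: list[dict]) -> dict[str, float]:
    """Per-group error rate, where each record carries hypothetical keys
    'group', 'prediction', and 'ground_truth'."""
    totals: dict[str, int] = defaultdict(int)
    errors: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["ground_truth"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def blind_spots(records: list[dict], tolerance: float = 0.05) -> list[str]:
    """Groups whose error rate exceeds the best-served group's rate by more than
    `tolerance`: candidates for hermeneutical blind spots, not proof of them."""
    if not records:
        return []
    rates = disaggregated_error_rates(records)
    baseline = min(rates.values())
    return [g for g, rate in sorted(rates.items()) if rate - baseline > tolerance]
```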

To speak bluntly: letting a system claim epistemic authority is a social experiment with lives and norms on the line. Technologists may optimize for performance metrics, and commanders may value speed. Both impulses are legitimate. But when the machine’s claim to knowledge substitutes for social processes of justification and contestation, we have traded democratic epistemic practices for engineered fiat. The remedy is not to reject autonomy. It is to embed autonomy inside practices that honor human epistemic agency: explainability that is operational, contestability that is routine, and accountability that is traceable.

If robotic systems are to reduce human risk in combat they must not simultaneously reduce human responsibility for knowledge. The psychological literature gives us the outlines of the problem. The philosophical literature gives us moral categories to diagnose it. The engineering and doctrinal work required to close the gap is practical and institutional. Absent that work, the claim that a machine “knows” will too often become the final word instead of the opening line in a conversation about evidence, judgment, and responsibility.