The recent proliferation of artificial intelligence into front-line kit and information operations has altered not only how battles are fought but also how war is experienced. Psychological research that once focused on the embodied soldier now confronts a dispersed ecology of actors: remote operators, proxy combatants who wield increasingly accessible autonomous systems, civilian populations subject to new forms of threat, and publics targeted by AI-amplified cognitive operations. The empirical literature and field reports through mid-2025 show predictable continuities and unsettling novelties in the psychology of contemporary proxy conflict.
First, the evidence on remote operators is robust and consistent: screen-mediated killing produces real psychological harms. Multiple empirical surveys and narrative reviews document elevated rates of PTSD symptoms, burnout, moral injury, disturbed sleep, and emotional disengagement among remotely piloted aircraft (RPA) crews and associated mission staff. These findings were present in early large surveys of United States Air Force RPA crews and have been reinforced in later reviews and qualitative analyses of Reaper and Predator operators. The mechanism is not merely exposure to violent imagery. It is the cognitive dissonance produced by intimate sensory access to killing coupled with physical safety and ordinary daily life back home, a combination that produces moral conflict rather than relief.
Second, autonomy and AI change who experiences those psychological burdens and how. Where AI-enabled automatic target recognition, autonomous navigation, and low-cost attritable drones have lowered the barrier to lethal force, the psychological locus of responsibility diffuses. Field research and policy analyses of the Ukraine conflict through early 2025 document a substantial increase in the operational use of AI-assisted drones and modular autonomy. These systems make strike success more achievable for lower-skill operators and for proxy or irregular forces that can acquire inexpensive autonomy modules. The result is twofold. At the tactical level, more operators face ethically fraught choices without commensurate training or institutional support. At the societal level, civilians live under persistent threat from cheap, semi-autonomous weapons that can be operated at range or by proxies with minimal oversight.
Third, proxy wars are now cognitive wars as much as they are kinetic. Studies of disinformation and algorithmic amplification from the Russia–Ukraine theatre show how adversaries synchronize fear-inducing narratives and deploy automated accounts and generative tools to shape perceptions, to demoralize, and to induce paralysis or overreaction among target audiences. AI accelerates the construction and dissemination of emotionally charged themes, meaning that psychological operations scale more cheaply and with greater personalization than in previous conflicts. The psychological consequence is chronic anxiety, altered risk perception, and the erosion of collective resilience among affected civilian and political populations.
Fourth, there is an interaction between machine autonomy and human cognitive processes that matters clinically and organizationally. Human factors research shows that levels of autonomy and shared-control regimes influence cognitive load, trust, and decision-making patterns among teleoperators. Partial autonomy can reduce routine cognitive burden while simultaneously increasing over-reliance or complacency, and sudden handover events can spike workload and prompt miscalibration of trust. In combat contexts this dynamic amplifies opportunities for error and moral confusion, and it complicates the design of mental-health support and accountability pathways.
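To make that dynamic concrete, the toy simulation below sketches one plausible reading of it: routine automation success nudges an operator's trust upward, while an abrupt handover event spikes workload and leaves that trust calibrated to a regime that no longer applies. Everything in the sketch is hypothetical, including the trust-update rule, the reliability figure, and the workload constants; it illustrates the qualitative shape of the problem described above, not the model of any cited study.

```python
"""Toy model of trust calibration and workload under partial autonomy.

Purely illustrative: all parameters and update rules are hypothetical.
"""

import random

TRUE_RELIABILITY = 0.90   # hypothetical probability the automation performs correctly
LEARNING_RATE = 0.05      # how quickly the operator revises trust after each event
HANDOVER_STEP = 60        # step at which control is abruptly handed back to the human
BASELINE_WORKLOAD = 0.3   # nominal workload while supervising the automation
HANDOVER_WORKLOAD = 0.9   # transient workload spike at handover (hypothetical)


def simulate(steps: int = 100, seed: int = 1) -> list[tuple[int, float, float]]:
    """Return (step, operator_trust, workload) triples for a single run."""
    rng = random.Random(seed)
    trust = 0.5  # operator starts with neutral trust in the automation
    history = []
    for t in range(steps):
        # Routine supervision: trust drifts toward observed automation performance.
        outcome = 1.0 if rng.random() < TRUE_RELIABILITY else 0.0
        trust += LEARNING_RATE * (outcome - trust)

        # Workload is low while the automation handles the task, then spikes and
        # decays slowly after the handover. Note that trust is never revised at
        # the handover itself, which is the miscalibration the text describes.
        if t < HANDOVER_STEP:
            workload = BASELINE_WORKLOAD
        else:
            decay = 0.95 ** (t - HANDOVER_STEP)
            workload = BASELINE_WORKLOAD + (HANDOVER_WORKLOAD - BASELINE_WORKLOAD) * decay

        history.append((t, round(trust, 3), round(workload, 3)))
    return history


if __name__ == "__main__":
    # Print the steps around the handover to show the workload spike.
    for step, trust, workload in simulate()[55:70]:
        print(f"step={step:3d}  trust={trust:.3f}  workload={workload:.3f}")
```

The design choice worth noticing is that trust and workload are tracked as separate state variables: partial autonomy can improve one while quietly degrading the other, which is why single-measure evaluations of shared control can mislead.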
Fifth, the human costs to civilian populations are acute and under-studied. By mid-2025 human-rights investigations and field health reports recorded deliberate or reckless drone attacks that aimed to terrorize civilians, producing widespread psychological trauma, disruption of essential services, and community-level fear responses. These reports underscore that inexpensive autonomous or semi-autonomous platforms enable tactics of persistent harassment that have long-term consequences for mental health and social cohesion. The literature still lacks standardized epidemiological studies of these effects, a gap that hinders intervention design.
Taken together, these strands suggest several research and policy priorities. One, we need longitudinal, cross-cultural studies that compare the psychological sequelae of traditional, remote, and AI-mediated violence across combatants and civilians. Two, human-machine interface research must be married to clinical science: trials that manipulate autonomy levels and measure trust, cognitive load, error rates, and downstream moral distress will be essential, as sketched below. Three, mental-health services and resilience programs must be adapted to novel exposures: programs designed for deployed troops cannot simply be transposed to remote crews or to civilians living under drone harassment. Finally, international law and procurement regimes should consider psychological harms as a criterion for deployment and export controls. If affordable autonomous systems make it feasible to adopt tactics of terror at scale, then those harms are not incidental but strategic.
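Point two implies a concrete experimental skeleton, and the sketch below is one minimal version of it. Everything in it is a placeholder: the condition names, sample sizes, and simulated outcome scores are hypothetical, and the analysis is deliberately the simplest possible (a Welch comparison of a single outcome). A real trial would use validated workload and moral-distress instruments and a pre-registered mixed-effects analysis.

```python
"""Minimal sketch of an autonomy-level trial: two arms, one simulated outcome."""

import random
import statistics


def simulate_scores(n: int, mean: float, sd: float, rng: random.Random) -> list[float]:
    """Placeholder outcome scores (e.g., a cognitive-load rating) for one trial arm."""
    return [rng.gauss(mean, sd) for _ in range(n)]


def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for the difference in means between two independent arms."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)


if __name__ == "__main__":
    rng = random.Random(7)
    # Two hypothetical arms: manual teleoperation versus partial autonomy.
    manual = simulate_scores(n=40, mean=6.0, sd=1.2, rng=rng)   # assumed higher routine load
    partial = simulate_scores(n=40, mean=4.5, sd=1.4, rng=rng)  # assumed lower routine load
    print(f"mean cognitive-load rating, manual arm:  {statistics.mean(manual):.2f}")
    print(f"mean cognitive-load rating, partial arm: {statistics.mean(partial):.2f}")
    print(f"Welch's t for the difference:            {welch_t(manual, partial):.2f}")
```

The point of the sketch is only that autonomy level can be treated as a manipulable factor and cognitive load as a measurable outcome, which is what marrying interface research to clinical science requires in practice.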
Philosophically, the diffusion of agency that autonomy produces should trouble us. The classic moral psychology of killing presumes an agent proximate to the act. Modern proxy wars create layered agency: algorithm designers, module assemblers, remote operators, financiers, and local proxies all contribute to outcomes. Moral responsibility becomes a network problem. Psychology can illuminate how responsibility is perceived, displaced, or rationalized within those networks, revealing both vulnerabilities to abuse and points for intervention.
Concretely, policymakers and practitioners should adopt three modest but urgent steps. First, require human-factors testing and psychological impact assessments before fielding autonomy modules at scale. Second, mandate training and post-mission debriefing protocols for any operator who uses AI-assisted lethal effectors, with screening for moral injury and PTSD and with pathways to care that recognize the distinct features of screen-mediated trauma. Third, fund community-level psychosocial support and epidemiological research in conflict-affected regions where autonomous systems are in use, and build those findings into arms-control deliberations.
The forces reshaping warfare are not only technological. They are psychological and moral. If we treat AI as merely a force multiplier we will miss its subtler function: it reshapes the lived experience of violence, redistributes suffering, and challenges our categories of responsibility. Robust empirical work must therefore proceed alongside ethical and legal reflection. The sciences of mind and machine must be yoked to the practices of restraint and care if we are to avoid creating conflicts in which the wounds are invisible until they are irreparable.