The past few months of public demonstrations and after-action material from U.S. and allied experiments have done something useful and uncomfortable. They have replaced the comforting marketing narrative of “robots that keep soldiers safe” with a clearer technical ledger: advanced ground robots remain fragile in contested environments, and the very tests meant to prove their readiness also expose the seams through which failure arrives.
What we have seen on parade is not failure of imagination but failure of integration. Large-scale experiments such as the Army’s recent human-machine integration exercises and the Project Convergence events have shown a proliferation of platforms — legged quadrupeds, tracked and wheeled UGVs, autonomous convoys and sensor-laden logistic carriers — operating together to extend sensing and reduce soldier exposure. Those demonstrations are real and valuable, and they are rightly celebrated for the progress they represent. They are also public and therefore pedagogical: every demo is an open lesson to engineers and to adversaries alike.
The technical record beneath the veneer of capability is instructive. Modern autonomy depends on sensor fusion, resilient navigation and robust communications. Yet the research literature has, for several years, documented concrete attack vectors that undermine those primitives. LiDAR and mmWave radar systems can be spoofed or saturated; adversarial lasers and purpose-built spoofers have been shown in controlled experiments to inject phantom obstacles or erase legitimate returns. Similarly, relatively unsophisticated GNSS jamming and electronic warfare can strip away the positioning and comms that UGV autonomy assumes. When tests and demonstrations occur in permissive conditions, those limits are hidden. When they are pressured by contested-spectrum effects or by adversarially targeted sensor attacks, the machine’s brittleness becomes visible.
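To make the sensor-fusion point concrete, consider a minimal cross-modality plausibility check. The sketch below is illustrative only: the detections, tolerances, and sensor pairing are invented, and a fielded system would use calibrated noise models and tracking rather than naive matching. The narrow idea it demonstrates is that a LiDAR return with no corroborating radar echo is a candidate phantom and should trigger caution rather than blind obstacle avoidance.

```python
# A cross-modality plausibility check. Detections and tolerances are
# invented; a fielded system would use calibrated noise models and
# tracking rather than this naive nearest-match test.

# Hypothetical detections as (range_m, bearing_deg) pairs from each sensor.
lidar_hits = [(12.0, 5.0), (40.0, -10.0), (8.5, 0.0)]
radar_hits = [(11.8, 5.5), (8.4, -0.5)]

RANGE_TOL_M = 1.0      # assumed corroboration tolerances; real values depend
BEARING_TOL_DEG = 2.0  # on sensor noise characteristics and calibration

def corroborated(hit, others):
    """True if any detection from the other modality falls within tolerance."""
    r, b = hit
    return any(abs(r - r2) <= RANGE_TOL_M and abs(b - b2) <= BEARING_TOL_DEG
               for r2, b2 in others)

for hit in lidar_hits:
    if not corroborated(hit, radar_hits):
        # A LiDAR return with no radar echo is suspect: perhaps a spoofed
        # phantom, perhaps a radar-transparent object. Policy, not this
        # code, decides whether to slow, reroute, or call an operator.
        print(f"uncorroborated LiDAR detection at {hit}: treat as suspect")
```

Diversity cuts both ways, of course: radar has its own spoofing surface, which is why resilience work treats corroboration as one layer among several rather than a cure.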
Engineers address these problems with the language of resilience. Resilience quantification for military UGVs is an emerging field precisely because survivability is not a single statistic. It is a suite of interlocking trade-offs: sensor diversity versus cost and power draw; hardened communications versus size and weight; mechanical redundancy versus payload capacity. Recent work on resilience modelling shows that without deliberate design to mitigate single points of failure, a vehicle that performs admirably in benign tests can fail catastrophically under realistic, compound stresses. The implication is not that robots cannot improve. It is that survivability must be engineered into the system from sensor choice to architecture, and validated under degraded and adversarial conditions rather than ideal ones.
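A toy model makes the compound-stress argument legible. In the sketch below, the subsystem list, the survival probabilities, and the stress profiles are all assumed numbers, chosen only to show the shape of the problem: a series chain of subsystems that each look acceptable in isolation can yield an unacceptable mission-level figure once stresses compound, and targeted redundancy buys back much of the loss.

```python
# A toy series-parallel survivability model. Subsystem names, survival
# probabilities, and stress profiles are illustrative assumptions, not
# measured data; a real program would derive them from instrumented testing.

def parallel(probs):
    """A redundant group survives unless every member fails."""
    fail = 1.0
    for p in probs:
        fail *= (1.0 - p)
    return 1.0 - fail

def mission_survival(groups):
    """Series chain: the mission survives only if every group survives."""
    total = 1.0
    for probs in groups.values():
        total *= parallel(probs)
    return total

# Per-component survival probabilities under each stress profile.
profiles = {
    "benign":                 {"nav": [0.99], "comms": [0.98], "mobility": [0.99]},
    "contested":              {"nav": [0.70], "comms": [0.60], "mobility": [0.95]},
    "contested + redundancy": {"nav": [0.70, 0.80], "comms": [0.60, 0.75],
                               "mobility": [0.95]},
}

for name, groups in profiles.items():
    print(f"{name:>24}: P(mission survives) = {mission_survival(groups):.2f}")
```

The trade-offs named in the paragraph above live in those bracketed lists: every added component is weight, power, and money spent purchasing survival probability.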
There is also a doctrinal and acquisition problem. Program offices and experimentation campaigns have tended to treat UGVs as capability demonstrations or force multipliers rather than as systems-of-systems that must endure kinetic and non-kinetic stressors. The behavior of an autonomous convoy on a dry test range does not prove that an equivalent formation will survive in a dynamic battlefield where jamming, spoofing, small-arms fire, and roadside explosives coexist. Public tests that highlight mobility and autonomy but do not subject the platform to contested-spectrum or lethal-force stressors risk creating a false sense of security. Demonstrations are necessary but insufficient.
We must also be honest about the attritability trade-off. Many military planners argue that smaller, cheaper UGVs can be treated as expendable: they will be sent into the highest-risk tasks and thus need not be highly survivable. That is a valid concept, but it does not absolve designers of responsibility. Lower-cost platforms still require predictable failure modes, graceful degradation, and safe failover so that their loss does not cascade into larger tactical problems. Tests that expose how the failure of a single sensor or a single link cascades into total mission collapse are especially valuable. They teach how to design bounded failure and how to limit collateral operational harm. Practical survivability is as much about predictable, constrained failure as it is about raw toughness.
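What "bounded failure" can mean in software deserves a concrete, if simplified, illustration. The mode ladder and failure-to-mode policy below are hypothetical; the point is the structure: each detected failure forces the platform to at least a known, more constrained mode, so the worst case is a safe stop rather than an unpredictable cascade.

```python
# A minimal sketch of graceful degradation as a mode ladder. The mode names
# and the failure-to-mode policy are invented for illustration; a real
# policy would come from hazard analysis and doctrine.

MODES = ["full_autonomy", "degraded_autonomy", "limp_home", "safe_stop"]

# Each recognized failure forces the platform at least this far down the ladder.
FLOOR = {
    "gnss_lost":     "degraded_autonomy",  # fall back to odometry/visual nav
    "lidar_suspect": "limp_home",          # distrust perception; crawl a known route
    "datalink_lost": "safe_stop",          # no oversight available: halt in place
}

def degrade(current_mode, active_failures):
    """Return the most constrained mode demanded by any active failure."""
    floor = max([MODES.index(current_mode)] +
                [MODES.index(FLOOR[f]) for f in active_failures if f in FLOOR])
    return MODES[floor]

print(degrade("full_autonomy", ["gnss_lost"]))                   # degraded_autonomy
print(degrade("full_autonomy", ["gnss_lost", "datalink_lost"]))  # safe_stop
```

The design choice worth noting is monotonicity: failures can only push the platform down the ladder, never silently back up, which keeps its behavior predictable to the humans around it.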
Finally, these technical vulnerabilities have ethical and legal echoes. The International Committee of the Red Cross and other bodies have repeatedly underscored the humanitarian and legal risks associated with autonomous systems used to apply force. Publicly exposed tests that show perception failure modes and communications fragility are not only engineering data; they are evidence that human-in-the-loop constraints, meaningful human control, and accountability frameworks remain necessary design requirements. If survivability testing reveals brittleness, the response cannot be merely to push systems into operational use and hope doctrine or human oversight will compensate. It must be to harden the architecture, to require more rigorous contested-environment testing, and to bind deployment to use-cases where human control and oversight are demonstrably effective.
Practical recommendations are straightforward even if politically uncomfortable. First, staged tests must include contested-spectrum scenarios, LiDAR/radar spoofing, and GNSS denial as routine elements of certification. Second, resilience metrics must be standardized and baked into procurement requirements so that a vehicle’s “survivability budget” is auditable across sensor suites, communications, and mechanical subsystems. Third, demonstrations must resist the impulse to prioritize spectacle over stress testing; a parade of robots is not proof of combat readiness. Fourth, doctrine and acquisition must align: if systems are intended to be attritable, the force will need the logistics, replacement cadence, and clear rules of engagement that accept and manage that attrition.
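The second recommendation, an auditable survivability budget, can be as prosaic as a machine-checkable certification gate. In the sketch below, the scenario names, thresholds, and campaign results are placeholders for whatever a real standard would mandate; the point is that "not tested under GNSS denial" becomes a blocking finding rather than a footnote.

```python
# A sketch of an auditable certification gate. Scenario names, thresholds,
# and campaign results are invented placeholders, not a real standard.

REQUIRED_SCENARIOS = {
    # scenario        -> minimum pass rate required for certification
    "gnss_denial":      0.90,
    "comms_jamming":    0.85,
    "lidar_spoofing":   0.90,
    "compound_stress":  0.75,   # several stressors applied simultaneously
}

# Hypothetical results from one test campaign: scenario -> observed pass rate.
campaign = {"gnss_denial": 0.93, "comms_jamming": 0.80, "lidar_spoofing": 0.91}

def audit(results):
    """Return blocking findings; an empty list means the gate is passed."""
    findings = []
    for scenario, threshold in REQUIRED_SCENARIOS.items():
        if scenario not in results:
            findings.append(f"NOT TESTED: {scenario}")
        elif results[scenario] < threshold:
            findings.append(f"BELOW THRESHOLD: {scenario} "
                            f"({results[scenario]:.2f} < {threshold:.2f})")
    return findings

for finding in audit(campaign):
    print(finding)
```

Run against this hypothetical campaign, the gate flags the jamming shortfall and the missing compound-stress trial, which is exactly the kind of finding a parade cannot surface.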
The tests that have been exposed in public forums should be treated as an opportunity. They are a corrective to hype and a prompt to rigorous engineering and ethical reflection. The future battlefield will include robots. Whether those robots reduce risk or compound it will depend less on marketing and more on the willingness of militaries, industry and the research community to confront the ugly, teachable details that survivability tests reveal. Make no mistake: the machines are getting better. The harder question is whether we are getting the tests, the standards and the doctrine right before those machines are asked to keep humans safe under fire.