The Robotic Combat Vehicle program has reached an inflection point. After the Army selected four competitors to build RCV-Light demonstrators in 2023 and received prototype deliveries during 2024, the program was structured to downselect in fiscal 2025 and to have a single team deliver up to nine full-system prototypes in fiscal 2026 for rigorous test and evaluation. These milestones frame the tests that will follow; they are not a mere formality. The 2026 full-system campaign will be the program's acid test: it must demonstrate not only mechanical reliability but also the human-machine relationship that will define operational utility.
To be clear, what the Army and industry mean by "full systems" is more than weapons and sensors bolted onto an unmanned chassis. The objective is an integrated combat system: mobility, powertrain, communications, mission software, remote operator controls, safety interlocks, and mission payloads working together under operational constraints. The Army has repeatedly framed RCV as a means to extend formations, perform scouting and escort missions, and create tactical options that change the timing and geometry of engagements. The 2026 prototypes must therefore be exercised in conditions that approximate those operational aims.
I propose four priorities for the 2026 test campaign. First, autonomy and degraded-link behavior. Tests must stress the autonomy stack across degraded communications, contested electromagnetic environments, and intermittent sensor feeds. It is not enough to show waypoint following on a clear day. Instead, experiments should simulate realistic comms latency and loss, deliberate jamming, GPS denial, sensor occlusion, and data-integrity attacks. Success criteria should emphasize graceful degradation, predictable fail-states, and transparent operator cues about what the system can and cannot do. Autonomy metrics must be framed in terms of tactical effect and operator trust, not only obstacle-clearance rates or average speed.
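To make "predictable fail-states" concrete, here is a minimal sketch of a link-health policy that maps link condition to a deterministic operating mode. The mode names, thresholds, and timeout values are hypothetical illustrations, not drawn from any RCV requirement; the point is that for any link condition, the operator can predict, and the interface can display, exactly what the vehicle will do.

```python
"""Minimal sketch of a degraded-link fail-state policy.

All thresholds, mode names, and LinkSample fields are hypothetical
assumptions for illustration.
"""
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    TELEOP = "direct teleoperation"       # healthy link
    SUPERVISED = "supervised autonomy"    # degraded link: waypoint tasking only
    HOLD = "hold and beacon"              # lost link: stop, mark position, await recovery


@dataclass
class LinkSample:
    latency_ms: float   # round-trip control latency
    loss_pct: float     # packet loss over the sample window
    age_s: float        # seconds since the last valid control frame


def fail_state(sample: LinkSample) -> Mode:
    """Map link health to a deterministic operating mode."""
    if sample.age_s > 5.0:                               # assumed link-loss timeout
        return Mode.HOLD
    if sample.latency_ms > 250 or sample.loss_pct > 10:  # assumed degradation bounds
        return Mode.SUPERVISED
    return Mode.TELEOP


if __name__ == "__main__":
    for s in (LinkSample(40, 0.5, 0.1),
              LinkSample(400, 12.0, 1.0),
              LinkSample(0, 100.0, 8.0)):
        print(s, "->", fail_state(s).value)
```

A jamming or GPS-denial trial would then verify that the vehicle lands in the advertised mode every time, which is what "graceful degradation" has to mean in practice.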
Second, human-machine teaming and operator workload. A full-system test is an opportunity to measure how soldiers actually operate RCVs on realistic mission timelines. Laboratory usability scores are helpful but not decisive. The Army should run mission threads in which operators manage multiple vehicles, switch between direct teleoperation and higher-level tasking, and respond to unexpected events. Physiological and cognitive workload metrics, mission completion times, and rates of misinterpreting autonomy intent should all be captured. These data will determine whether a design reduces cognitive load or merely shifts it.
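One way to make such measurements comparable across prototypes is a common event log from which the same metrics are derived for every vendor. The event taxonomy and scoring below are assumptions, sketched only to show the kind of instrumentation the campaign would need.

```python
"""Sketch of standardized mission-thread metrics for operator workload.

Event names and the scoring scheme are hypothetical; the point is that
every competing prototype should emit the same event stream so analysts
can compare like with like.
"""
from dataclasses import dataclass


@dataclass
class Event:
    t: float                # seconds since mission start
    kind: str               # e.g. "mode_switch", "intervention", "intent_query"
    misread: bool = False   # flagged in after-action review: operator
                            # misinterpreted what the autonomy was doing


def mission_metrics(events: list[Event], mission_end_s: float) -> dict:
    """Derive comparable workload metrics from one mission thread."""
    interventions = [e for e in events if e.kind == "intervention"]
    misreads = [e for e in events if e.misread]
    return {
        "completion_time_s": mission_end_s,
        "interventions_per_min": 60 * len(interventions) / mission_end_s,
        "misinterpretation_rate": len(misreads) / max(len(events), 1),
    }


if __name__ == "__main__":
    log = [Event(42, "mode_switch"),
           Event(90, "intervention", misread=True),
           Event(300, "intent_query")]
    print(mission_metrics(log, mission_end_s=900))
```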
Third, lethality integration and legal-ethical guardrails. If an RCV carries direct-fire or anti-armor payloads, the test plan must validate end-to-end control of target acquisition, targeting decisions, and human-in-the-loop approval under realistic pressure. The technical questions here are intertwined with doctrine and rules of engagement. Tests should include red-team scenarios, ambiguous target sets, and time-critical engagement sequences so that developers and commanders can see how safety interlocks perform when milliseconds matter. Equally important are tamper-evident logs and forensic traces to support later accountability.
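To illustrate what such traces might involve, here is a sketch of a human-in-the-loop engagement gate that appends every decision to a hash-chained, append-only log. The approval flow, field names, and deadline logic are illustrative assumptions, not a description of any fielded interlock.

```python
"""Sketch of a human-in-the-loop engagement gate with a tamper-evident log.

The hash-chained record is one assumed way to make forensic traces
resistant to after-the-fact editing.
"""
import hashlib
import json
import time


class EngagementGate:
    def __init__(self):
        self._log = []
        self._prev = "0" * 64  # hash-chain anchor

    def _append(self, record: dict) -> None:
        record["prev"] = self._prev  # chain each entry to the last
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self._log.append(record)
        self._prev = digest

    def request(self, track_id: str, operator_approved: bool,
                deadline_s: float, elapsed_s: float) -> bool:
        """Fire only if a human approved within the deadline; log everything."""
        fired = operator_approved and elapsed_s <= deadline_s
        self._append({
            "ts": time.time(), "track": track_id,
            "approved": operator_approved, "elapsed_s": elapsed_s,
            "deadline_s": deadline_s, "outcome": "fire" if fired else "withhold",
        })
        return fired


if __name__ == "__main__":
    gate = EngagementGate()
    print(gate.request("T-031", operator_approved=True, deadline_s=4.0, elapsed_s=2.1))  # True
    print(gate.request("T-032", operator_approved=True, deadline_s=4.0, elapsed_s=6.8))  # False: approval arrived too late
```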
Fourth, sustainment, transportability, and lifecycle realism. Early RCV concepts emphasized C-130 and rotary-wing transport, limited weight, and the notion that some variants could be treated as expendable. Full-system prototypes must be evaluated for field repairability, modularity of mission payloads, and the software-update pathways needed for iterative fixes. The Army's broader acquisition reforms and the ongoing debates over right-to-repair and contracting terms bear directly on these tests; evaluation should include maintenance timelines under constrained spare-parts conditions and the ability to apply software patches without returning vehicles to contractors.
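A minimal sketch of the integrity check a field-update pathway implies follows. It uses a shared-key authentication tag for brevity; a real pathway would use asymmetric signatures anchored in a hardware root of trust, and every name here is a placeholder.

```python
"""Sketch of a field software-update check: verify integrity before applying.

The HMAC shared-key scheme and all names are illustrative assumptions,
standing in for a production signing infrastructure.
"""
import hashlib
import hmac

FLEET_KEY = b"replace-with-provisioned-key"  # placeholder provisioning secret


def verify_patch(bundle: bytes, tag_hex: str) -> bool:
    """Accept the patch only if its authentication tag matches."""
    expected = hmac.new(FLEET_KEY, bundle, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag_hex)


def apply_patch(bundle: bytes, tag_hex: str) -> str:
    if not verify_patch(bundle, tag_hex):
        return "rejected: bad tag, vehicle stays on current software load"
    # ...stage to the inactive partition, keep a rollback image, reboot...
    return "staged: will activate on next safe reboot"


if __name__ == "__main__":
    patch = b"mission-software v2.3.1 delta"
    good_tag = hmac.new(FLEET_KEY, patch, hashlib.sha256).hexdigest()
    print(apply_patch(patch, good_tag))
    print(apply_patch(patch, "00" * 32))
```

Whether soldiers can run a check like this at the motor pool, without contractor hands on the vehicle, is exactly the contracting question the tests should expose.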
Beyond the priorities themselves, the design of the test campaign matters. Avoid the trap of acceptance testing masquerading as developmental experimentation. Tests should be hypothesis driven. For example, if the claim is that a team of three RCVs paired with a single Bradley provides X additional minutes of warning before contact and reduces casualty risk by Y, then the test must be structured to confirm or refute that hypothesis with relevant controls and statistical rigor. Mixed-initiative behaviors should be isolated and tested in repeatable scenarios so that cause and effect can be inferred. Data collection must be comprehensive and standardized across competing prototypes to enable apples-to-apples analysis.
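To show what "confirm or refute with statistical rigor" can look like in practice, here is a sketch of a permutation test on warning-time data. The numbers are fabricated placeholders, and the method stands in for whatever experimental design the test community actually adopts.

```python
"""Sketch of a hypothesis-driven comparison: warning time with and without RCVs.

A one-sided permutation test on the difference in mean warning times;
all trial data below are fabricated for illustration.
"""
import random
import statistics as stats


def permutation_p(control, treatment, n_iter=10_000, seed=1):
    """One-sided p-value for 'treatment mean warning time exceeds control'."""
    rng = random.Random(seed)
    observed = stats.mean(treatment) - stats.mean(control)
    pooled = control + treatment
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # reassign trials to groups at random
        perm = stats.mean(pooled[len(control):]) - stats.mean(pooled[:len(control)])
        if perm >= observed:
            hits += 1
    return hits / n_iter


if __name__ == "__main__":
    # Minutes of warning before contact, per trial (illustrative numbers only).
    bradley_alone = [2.1, 3.0, 1.8, 2.6, 2.4, 2.9]
    bradley_plus_rcvs = [4.2, 3.8, 5.1, 4.6, 3.9, 4.4]
    print(f"p = {permutation_p(bradley_alone, bradley_plus_rcvs):.4f}")
```

The same discipline applies to every claimed benefit: state the hypothesis before the trial, fix the analysis in advance, and run enough repetitions for the answer to mean something.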
There is also a political and institutional context to acknowledge. High-level directives announced in spring 2025 call for acquisition reform and a reexamination of force structure and program priorities. These directives create a dual pressure: programs must deliver demonstrable operational value quickly while surviving heightened scrutiny of cost, sustainment, and strategic relevance. The RCV program will therefore be judged not only on technical performance in 2026 but also on how clearly the results map to doctrinal needs and budgets. Test design must produce the evidence required by both technologists and policymakers.
Finally, some cautions rooted in history. Emerging systems routinely trade one set of risks for another. A fleet of robotic scouts can reduce soldier exposure at first contact while creating new dependencies on fragile logistics and contested networks. The ethical and legal questions around weaponized autonomy will not be settled by a single prototype campaign. The 2026 full-system tests can and should, however, illuminate the contours of those debates by producing rigorous, transparent datasets and after-action analyses that separate marketing claims from operational truth.
If the program uses 2026 to run robust, comparative, mission‑relevant experiments, then it will have succeeded regardless of which vendor wins a production contract. If the tests are perfunctory, the program risks producing expensive curiosities rather than tools that change outcomes. The Army, industry, and the public deserve tests that are scientifically rigorous and ethically explicit. Only then can robotic combat vehicles move from promising prototypes to disciplined, accountable additions to the force.