The recent Robotic Combat Vehicle-Light prototyping competition is exactly the sort of exercise the Army needs, but it is not a finish line. In September 2023 the service awarded Phase I prototype work to four teams — McQ Inc., Textron Systems, General Dynamics Land Systems, and Oshkosh Defense — in a combined package worth roughly $24.7 million.
Each vendor was tasked to design, build, and deliver two platform prototypes to support mobility testing and a Soldier touchpoint, with those prototype deliveries slated to inform a downselect in a later phase of the program. The service framed Phase I as rapid prototyping to mature candidate designs toward a fieldable RCV-L capability rather than as a production decision.
Put bluntly, as of the current evaluation window the Army has not “selected winners” in the sense of a production contract. The program intent communicated in public reporting is to use these prototype deliveries, soldier feedback, and government testing to choose a single vendor to finalize designs and build a larger batch of prototypes in a follow-on phase expected in fiscal 2025. That downselect, not the Phase I awards, is where a single company would walk away with the lion’s share of the prototyping work.
Why the hedged language matters. Industry marketing will happily conflate Phase I awards and prototype deliveries with a “win” because it makes for good headlines. On the ground, however, prototypes are test articles. They prove integration approaches, reveal mechanical and electrical weak points, and expose the limits of autonomy stacks when pushed outside controlled environments. The Army will be looking for vehicles that do more than move and carry kit. It will be measuring maintainability, software maturity, friendliness to the Modular Open Systems Approach (MOSA), operator workload, and how well autonomy degrades into safe manual control under duress.
From a hands-on engineering perspective there are several predictable failure modes evaluators should watch for during the soldier touchpoints and mobility trials:
- Power and thermal margins. Lightweight robotic platforms tend to be power starved once you add radios, sensors, and mission payloads. Shortfalls show up as reduced mission time and a heavier logistics tail. A rough power-budget sketch follows this list.
- Software brittleness. Autonomy and perception systems perform well in sanitized demos but struggle with dust, glare, sensor occlusion, and adversary electronic attack. Expect a lengthy period of iterative fixes.
- Integration fallacies. Teams that treat sensors, weapons, and autonomy as bolt-on subsystems will be outperformed by architectures that design for integration from the start. Open interfaces matter, but only if they are actually implemented in the prototype deliveries.
- Human–machine interface friction. Soldier touchpoints expose where remote control rigs, displays, and comms create cognitive overload or slow decision cycles.
- Sustainment and reliability. Mean time between failures (MTBF), ease of field repair, and spare-parts commonality will be decisive in soldier preference and program risk assessments.
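To make the power point concrete, here is a minimal budget sketch. Every number in it is hypothetical rather than an RCV-L specification; the point is only that once radios, sensors, compute, and a payload are summed, margin against even a modest endurance requirement disappears quickly.

```python
# Illustrative power-budget arithmetic with hypothetical numbers -- not RCV-L specs.
# Shows how quickly sensor, compute, and payload draw erodes endurance margin.

BATTERY_WH = 12_000          # assumed usable battery energy, watt-hours
SUBSYSTEM_DRAW_W = {
    "drivetrain_hotel_load": 350,
    "radios": 180,
    "perception_sensors": 420,
    "autonomy_compute": 600,
    "payload": 450,
}

total_draw_w = sum(SUBSYSTEM_DRAW_W.values())   # 2,000 W
endurance_h = BATTERY_WH / total_draw_w         # 6.0 h at constant draw
required_h = 8                                  # notional endurance requirement
allowed_draw_w = BATTERY_WH / required_h        # 1,500 W to meet it
margin_pct = 100 * (allowed_draw_w - total_draw_w) / allowed_draw_w

print(f"Total draw: {total_draw_w} W")
print(f"Endurance at that draw: {endurance_h:.1f} h")
print(f"Margin against a {required_h}-hour requirement: {margin_pct:.0f}%")
```

A vehicle that looks fine in a two-hour demo can still be deeply in the red against a full mission profile.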
The program office has signaled an appetite for decoupling hardware and software acquisition to accelerate software capability deliveries. That is the right direction, but it raises its own risks. A modular software roadmap depends on stable, well-documented hardware interfaces and a robust test harness. If vendors hide proprietary interfaces or slip in fragile adapters, the promised software cadence becomes a slog.
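One way to keep that decoupling honest is to make interface conformance machine-checkable. The sketch below assumes a hypothetical vendor “platform status” message with made-up field names; it is an illustration of the kind of test-harness check implied here, not any program’s actual interface definition.

```python
# Minimal sketch of an automated interface-conformance check against a hypothetical
# vendor "platform status" message. Field names and types are illustrative only.

REQUIRED_FIELDS = {
    "timestamp_utc": str,
    "battery_soc_pct": (int, float),
    "speed_mps": (int, float),
    "autonomy_mode": str,
    "fault_codes": list,
}

def check_status_message(msg: dict) -> list[str]:
    """Return a list of conformance violations for one status message."""
    violations = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in msg:
            violations.append(f"missing field: {field}")
        elif not isinstance(msg[field], expected_type):
            violations.append(f"wrong type for {field}: {type(msg[field]).__name__}")
    return violations

# Example: a vendor message that silently dropped a documented field.
sample = {
    "timestamp_utc": "2024-08-01T14:32:00Z",
    "battery_soc_pct": 87.5,
    "speed_mps": 4.2,
    "autonomy_mode": "waypoint_follow",
    # "fault_codes" omitted -- the harness should catch this, not a soldier in the field
}
print(check_status_message(sample))   # ['missing field: fault_codes']
```

Run against every software drop in a government-owned harness, a check like this turns interface drift into a report line instead of a field surprise.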
So what should the Army do in evaluations to avoid a premature or appearance-driven selection? A few practical points:
- Run prototypes in degraded and contested conditions early and often. Test in dust, reduced visibility, GNSS-denied modes, and under representative EW signatures.
- Measure logistics impact quantitatively. Track battery swap times, repair times, and parts commonality with existing fleets. A bookkeeping sketch follows this list.
- Force interoperability checks with representative brigade systems to validate MOSA claims instead of taking interface claims at face value.
- Require third-party verification of autonomy performance and cybersecurity posture. Red teams for both digital and physical attack modes should be part of evaluation plans.
- Capture soldier workload metrics and task completion times rather than relying on subjective “soldier thumbs up” moments.
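To ground the call for quantitative logistics measurement, here is a minimal bookkeeping sketch using invented event records. The specific numbers are hypothetical; the principle is that MTBF, repair time, and battery-swap time fall out of logged events rather than end-of-day impressions.

```python
# Sketch of quantitative test-event bookkeeping, using made-up records.
# MTBF, repair time, and swap time are computed from logged data, not recollection.

from statistics import mean

# Hypothetical log: one entry per maintenance or logistics event during trials.
events = [
    {"type": "failure", "operating_hours_since_last": 41.0, "repair_min": 95},
    {"type": "failure", "operating_hours_since_last": 28.5, "repair_min": 140},
    {"type": "battery_swap", "swap_min": 18},
    {"type": "battery_swap", "swap_min": 22},
    {"type": "failure", "operating_hours_since_last": 55.0, "repair_min": 60},
]

failures = [e for e in events if e["type"] == "failure"]
swaps = [e for e in events if e["type"] == "battery_swap"]

mtbf_h = mean(e["operating_hours_since_last"] for e in failures)
mean_repair_min = mean(e["repair_min"] for e in failures)
mean_swap_min = mean(e["swap_min"] for e in swaps)

print(f"MTBF: {mtbf_h:.1f} h, mean repair: {mean_repair_min:.0f} min, "
      f"mean battery swap: {mean_swap_min:.0f} min")
```

None of this requires exotic tooling; it requires disciplined logging during the trials and agreement up front on what counts as a failure.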
Bottom line: the Phase I awards bought the Army test articles and options to learn. They were not, and should not be read as, a final endorsement of any single design. The reasonable expectation for the coming year is incremental, empirical learning, not triumphant marketing copy. If the program office holds to an honest, data-driven downselect process focused on mission effectiveness, logistics realism, and software openness, the Army will be in a position to pick a winner that can actually be sustained in the field rather than merely demonstrated in a static display.