We are living through a technological accelerando whose cadence is set as much by algorithmic learning curves as by political will. The era of machine-on-machine combat is not science fiction. It is an emerging tactical reality composed of three converging trends: increasingly capable autonomous agents that can outperform humans in narrow tasks, mass-producible low-cost loitering munitions and drones that can be launched in salvo, and naval and ground unmanned platforms designed to operate in contested spaces with limited human oversight. Each trend by itself changes war modestly. Together they reconfigure the logic of engagement.

Consider the air domain. In an emblematic experiment, the DARPA AlphaDogfight Trials demonstrated that an AI agent could defeat a skilled F-16 pilot in a within-visual-range simulated dogfight, winning five straight engagements in a constrained environment. The experiment was not a portent of immediate pilot obsolescence. It was a proof of principle that reinforcement-learned agents can master maneuvering, decision timing, and threat tradeoffs in a narrow tactical problem set. The implication is clear: AI can already contest, and in some settings exceed, human performance at discrete combat tasks. That capability is the seed of true machine-versus-machine aerial duels when paired with autonomous airframes.
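To make the learning mechanism concrete, here is a deliberately toy sketch of the trial-and-error loop such agents are built on: tabular Q-learning on an invented one-dimensional pursuit task, written in Python. This is nothing like the AlphaDogfight agents themselves, which used deep reinforcement learning inside a high-fidelity flight simulator; the environment, rewards, and hyperparameters below are made up purely to show the explore-act-score-update cycle that gets scaled into tactical competence.

```python
# Illustrative only: tabular Q-learning on a toy one-dimensional pursuit task.
# The AlphaDogfight agents used deep reinforcement learning in a high-fidelity
# flight simulator; this sketch only shows the shape of the learning loop.
import random

N = 11                  # positions 0..10 on a line
GOAL = 5                # the "target" sits at position 5
ACTIONS = [-1, 0, 1]    # move left, hold, move right

# Q[state][action_index] -> estimated long-run value of that action
Q = [[0.0] * len(ACTIONS) for _ in range(N)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, min(N - 1, state + action))
    done = nxt == GOAL
    # small penalty proportional to distance, big bonus on intercept
    reward = 10.0 if done else -0.1 * abs(nxt - GOAL)
    return nxt, reward, done

for episode in range(2000):
    state = random.choice([s for s in range(N) if s != GOAL])
    for _ in range(50):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        nxt, reward, done = step(state, ACTIONS[a])
        # one-step temporal-difference update toward the observed return
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break

# The greedy policy should now close on the target from either side:
greedy = lambda s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
print([greedy(s) for s in range(N)])  # expect +1 left of the target, -1 to its right
```

The point of the toy is that nothing in the loop knows anything about aviation; competence emerges from repeated scored trials, which is why the same family of methods can be aimed at any tactical problem that can be simulated and scored.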

At sea and on the littorals, the United States and its partners have been experimenting aggressively with unmanned surface vessels and with distributed networks of sensors, connectivity, and command layers intended to multiply visibility and firepower while reducing risk to crews. The Navy is sprinting toward large and medium USVs that operate under operator-on-the-loop or operator-in-the-loop concepts, and its task forces have begun experimenting with mesh networks and cloud-enabled sensing to enable coordinated behavior among many platforms. Those platforms are not inert test articles; they are being fielded into exercises and conceptual operations that imagine massed unmanned fleets working in concert. The technical and doctrinal scaffolding for robot-on-robot maritime engagements is being built even if weapons employment policies remain cautious.

The war in Ukraine provides an immediate, painful laboratory for what robotized conflict looks like when cost, mass production, and decentralization combine. Both sides have used loitering munitions and tactical drones at scale to conduct reconnaissance and strike missions. The massing of low-cost suicide drones and small unmanned aircraft has helped normalize the idea of offensive unmanned salvos and stimulated countermeasures that are themselves increasingly automated. Viewed from the present, these campaigns read like early chapters in a future in which unmanned assets are deliberately arranged in layered, mutually supporting formations to engage other unmanned systems. Cheap attritable systems change the cost calculus of attrition and force structure.

Policy and law will not be mere epicycles to technology. They shape how machines are permitted to use force and how military organizations design human oversight. The U.S. Department of Defense has updated internal guidance to define autonomy categories and to insist on appropriate levels of human judgment over the use of force, embedding testing, evaluation, and legal review into acquisition and employment. Internationally, civil society and many states press for treaties or limits on lethal autonomous weapon systems, arguing that delegating life-and-death decisions to machines violates basic humanitarian norms. These debates are not abstractions. They will determine whether future robot-on-robot fights include meaningful human vetoes or whether they are permitted to escalate autonomously.

That said, technological possibility does not equal operational inevitability. There are four practical brakes worth noting.

1) The electromagnetic and sensing environment. Electronic attack, GPS denial, and sensor spoofing can collapse an autonomous agent's situational awareness. Machines are only as good as the data pipelines they trust. When those pipelines degrade, an autonomous system may fail safe, degrade in performance, or behave unpredictably.

2) The cost-exchange ratio. Cheap swarms can impose costs on expensive defenses, but only if logistics, launch capacity, and sustainment permit repeated employment. If defenders adapt with cheaper hard-kill or electronic countermeasures, the presumed advantage of massed robots can be erased (a rough numerical illustration follows this list).

3) Rules and approvals. Internal policy, export controls, and international pressure can slow fielding of systems permitted to select and engage without human intervention. Design choices taken today reflect regulatory constraints as much as engineering ones.

4) Trust and human-machine symbiosis. Even when autonomous agents perform well in narrow tasks, commanders must trust those agents to act in complex, ethically fraught contexts. Experimental results show that AI can be aggressive and effective in constrained simulations. Translating that into multi-domain combat with incomplete information and ambiguous targets is a different proposition.
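To put rough numbers on the cost-exchange point in item 2, the short Python sketch below runs the arithmetic with invented, round figures; they are placeholders, not estimates of any real system, but they show how the ratio can favor the attacker even when the defense intercepts almost everything, and how it flips once the defender's cost per shot falls.

```python
# Hypothetical round numbers, purely to illustrate the ratio logic.
drone_cost       = 50_000      # one attacking loitering munition
interceptor_cost = 1_000_000   # one defensive interceptor shot
salvo_size       = 20          # drones launched in a single salvo
intercept_rate   = 0.9         # fraction of the salvo the defense kills

drones_killed  = salvo_size * intercept_rate
attacker_spend = salvo_size * drone_cost
defender_spend = drones_killed * interceptor_cost  # optimistically, one shot per kill

print(f"attacker spends ${attacker_spend:,.0f}, defender spends ${defender_spend:,.0f}")
print(f"cost-exchange ratio (defender/attacker): {defender_spend / attacker_spend:.1f}")
# With these placeholders the defender pays roughly 18x the attacker's cost while
# still killing 90% of the salvo; swap in a $20,000 hard-kill or jamming shot and
# the ratio inverts, which is exactly the defensive adaptation described above.
```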

If we accept that robot-on-robot engagements are plausible within a near-term horizon, what strategic consequences flow from that acceptance? First, speed will increase. Decision loops compressed by automation create tempo advantages for the side that can safely close and exploit automated decision cycles without inviting catastrophic miscalculation. Second, attribution and escalation pathways blur. When an autonomous swarm launches and a different swarm counter-engages autonomously, it is harder to parse intent in real time. Third, proliferation risk rises. The low entry cost of many drones and loitering munitions means that non-state actors and smaller states can participate in robotized contestation, raising the chance of uncontrolled clashes.

Ethically and philosophically, the central question is not whether machines can be better shooters. It is whether we are willing to allow machines to make consequential moral distinctions under pressure without human stewardship. The practical answer that militaries have offered so far is mixed. Doctrines describe human supervision and legal reviews. Operational practice, as observed in recent conflicts, increasingly relies on automation to survive against massed salvos. That tension will not be resolved purely by technology. It is a political and moral question about the nature of responsibility under fire.

What should states and technologists do now? Three modest prescriptions:

  • Design for graceful degradation. Autonomous systems should be engineered to fail to a safe state, to admit uncertainty, and to require human confirmation outside a narrow, well-tested set of emergency contingencies (a minimal sketch after this list illustrates this point and the next).

  • Build interoperable, auditable decision trails. If robots engage robots, we must still be able to reconstruct why a machine chose a target. This is a requirement for accountability and for learning.

  • Invest in counter-autonomy as much as in autonomy. If swarms are the offense, resilient, scalable countermeasures are the defense. That includes electronic warfare, decoys, and doctrinal changes that accept that contested sensing is the norm.
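As a concrete, if simplified, illustration of the first two prescriptions, the Python sketch below gates an engagement decision on classifier confidence and a pre-approved envelope, defaults to deferring to a human, and writes every decision to an append-only trail. The thresholds, categories, and field names are invented for illustration; a fielded system would rest on far more than a single confidence score.

```python
# Illustrative sketch only: confidence-gated engagement decision with an
# auditable record. Thresholds, fields, and categories are invented.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Track:
    track_id: str
    category: str               # e.g. "uav", "usv", "unknown"
    confidence: float           # classifier confidence in [0, 1]
    inside_approved_box: bool   # inside a pre-approved engagement area?

CONFIDENCE_FLOOR = 0.95         # below this, never act autonomously
APPROVED = {"uav", "usv"}       # only unmanned categories are pre-cleared

def decide(track: Track) -> dict:
    """Return a decision record; default to deferring to a human operator."""
    if (track.confidence >= CONFIDENCE_FLOOR
            and track.category in APPROVED
            and track.inside_approved_box):
        action = "engage"
        reason = "high-confidence unmanned track inside approved envelope"
    else:
        action = "defer_to_human"   # fail toward inaction, not engagement
        reason = "confidence, category, or geography outside tested envelope"
    record = {
        "timestamp": time.time(),
        "action": action,
        "reason": reason,
        "track": asdict(track),
        "confidence_floor": CONFIDENCE_FLOOR,
    }
    # Append-only decision trail so the choice can be reconstructed later.
    with open("decision_trail.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Example: a low-confidence contact is deferred, not engaged.
print(decide(Track("t-017", "unknown", 0.62, True))["action"])
```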

In sum, a future of robot-versus-robot engagements is not inevitable, but it is plausible and accelerating. We cannot delegate the ethical and political choices to engineers or to battlefield expediency. If we wish to shape that future, we must do so now, through doctrine, law, and engineering choices that accept both the power and the fallibility of machines in war. Otherwise the machines will inherit the battlefield by default and the moral work of restraint will remain undone.