Imagine two naval task groups facing one another across a distant stretch of ocean. Not a single human sits on the bridge of any of the combatants. Decisions about target recognition, engagement timing, and post-strike assessment unfold in microseconds inside distributed processors. The sea, long a human arena, has been ceded to algorithms. This thought experiment is not merely science fiction. It is an analytically useful lens through which to evaluate where technology, law, and strategy converge and conflict.
Current capabilities point toward plausible pathways but also clear ceilings. The U.S. Navy and allied programs have already operationalized extended autonomy at sea. Ghost Fleet Overlord test vessels have completed long-range, largely autonomous transits and demonstrated compliance with the International Regulations for Preventing Collisions at Sea (COLREGs), offering a working foundation for larger unmanned surface fleets. Boeing and the Navy have pushed the envelope in parallel beneath the waves with extra-large unmanned undersea vehicles capable of long endurance and modular payloads. At the policy level, naval planners speak openly of a mixed manned-unmanned force architecture within the coming decade, signaling institutional intent to fuse autonomy into force structure.
If we accept those trajectories, what would a fully autonomous naval battle look like tactically? Several characteristics stand out. First, temporal compression. Autonomy removes human reaction-time constraints and indexes engagements to machine-speed sensing and machine-mediated decision loops. Second, distributed lethality. Autonomous platforms scale well; more units can be produced and dispersed to complicate an opponent’s targeting. Third, multi-domain coupling. Autonomous surface vessels, undersea vehicles, loitering munitions, and cooperative unmanned aerial systems would form layered systems of sensors and shooters capable of local decision-making and emergent behaviors.
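To feel the scale of that compression, consider a toy latency budget for one sense-assess-decide-act cycle. Every figure below is an invented illustration, not a measured value, and the stage names are my own shorthand.

```python
# Toy latency-budget comparison: human-in-the-loop vs. fully autonomous
# engagement cycle. All timings are illustrative assumptions.

HUMAN_LOOP_S = {
    "sense": 2.0,      # sensor picture assembled and displayed
    "assess": 20.0,    # operator interprets tracks
    "decide": 30.0,    # command approval chain
    "act": 5.0,        # weapon release and guidance handoff
}

MACHINE_LOOP_S = {
    "sense": 0.05,     # fused track update
    "assess": 0.02,    # onboard classifier inference
    "decide": 0.01,    # rules-of-engagement policy check
    "act": 0.5,        # weapon release and guidance handoff
}

threat_window_s = 60.0  # assumed time from detection to impact

for name, loop in (("human", HUMAN_LOOP_S), ("machine", MACHINE_LOOP_S)):
    cycle = sum(loop.values())
    print(f"{name}: {cycle:.2f}s per cycle, "
          f"{threat_window_s / cycle:.0f} cycles in the threat window")
```

Even with generous assumptions for the human chain, the machine loop completes two orders of magnitude more decision cycles inside the same threat window, which is the entire tactical argument for autonomy in one number.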
Yet these same features create severe operational brittleness. Robust command and control is the first concern. Autonomous systems rely on communications, navigation, and shared situational awareness. At sea those channels are contested both by the environment (propagation variability, multipath, and noise) and by adversary action (jamming, spoofing, and cyber attack). An adversary need not sink an opponent’s drone armada to win; denying it reliable sensors or authoritative identity data may be enough to induce catastrophic errors. History already offers precedent: improvised unmanned surface vessels and explosive-boat attacks by non-state actors have shown that small unmanned maritime systems can threaten high-value shipping and create strategic disruption. Those incidents are echoes of a future in which autonomy amplifies reach but not necessarily reliability.
Second, the problem of identification and discrimination remains unsolved at scale. Sensor fusion and machine learning are improving, yet the maritime environment is replete with ambiguity: fishing boats, flotsam, passive acoustic signatures, and dense civilian traffic. Algorithms trained in one theater do not generalize without careful retraining and evaluation. Mistakes may be rare statistically but catastrophic in consequence. International humanitarian law requires distinction and proportionality, principles that are difficult to operationalize inside a brittle autonomy stack. The International Committee of the Red Cross has argued for legal and ethical limits on autonomous weapons, including prohibiting systems that select human targets without meaningful human control. This normative pressure will shape both doctrine and public acceptance.
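As a hedged illustration of the generalization problem, consider a pre-deployment gate that refuses to field a classifier whose accuracy sags in a new theater relative to its home theater. The function name, thresholds, and accuracy figures are all invented for illustration.

```python
# Hypothetical pre-deployment gate for a maritime target classifier.
# Thresholds and accuracy figures are invented for illustration.

def deployment_gate(home_accuracy: float,
                    new_theater_accuracy: float,
                    min_accuracy: float = 0.99,
                    max_gap: float = 0.02) -> bool:
    """Approve fielding only if the model is accurate in the new theater
    AND has not degraded much relative to its home theater."""
    gap = home_accuracy - new_theater_accuracy
    return new_theater_accuracy >= min_accuracy and gap <= max_gap

# A model that looks excellent at home can still fail the gate abroad.
print(deployment_gate(home_accuracy=0.997, new_theater_accuracy=0.951))  # False
print(deployment_gate(home_accuracy=0.997, new_theater_accuracy=0.992))  # True
```

The point of the sketch is institutional, not algorithmic: the evaluation criteria, like the uncertainty thresholds discussed later, need to be explicit and auditable rather than buried in a procurement annex.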
Third, emergent behaviors and escalation dynamics present a profound strategic hazard. Autonomous systems interacting under imperfect information can generate positive feedback loops. Two fleets of autonomous agents, each optimizing locally for survival and mission success, may converge on engagement patterns neither side intended. The classic worry is uncontrolled escalation from tactical skirmish to wider conflict when automated systems misinterpret maneuvers as attack intent. Once machines start acting at machine speed, human corrective intervention may arrive too late to avert catastrophe.
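A minimal sketch shows how quickly mutual over-reaction can run away: two agents, each ratcheting its own posture in proportion to the other’s, with a hair-trigger at a fixed threshold. The update rule, gain, and threshold are illustrative assumptions, not a model of any real system.

```python
# Toy model of two autonomous fleets whose defensive postures feed back
# on each other. Gain and threshold are illustrative assumptions.

FIRE_THRESHOLD = 1.0   # posture level at which an agent opens fire
GAIN = 1.3             # each side over-reacts to the other's posture

posture_a, posture_b = 0.10, 0.12   # small initial wariness

for step in range(1, 20):
    # Each agent observes the other and ratchets its own posture upward.
    posture_a, posture_b = (max(posture_a, GAIN * posture_b),
                            max(posture_b, GAIN * posture_a))
    print(f"step {step}: A={posture_a:.2f}  B={posture_b:.2f}")
    if posture_a >= FIRE_THRESHOLD or posture_b >= FIRE_THRESHOLD:
        print("threshold crossed: engagement neither side intended")
        break
```

Neither agent ever intends to attack; the geometric growth comes entirely from each side’s conservative decision to match and slightly exceed the other’s readiness. That is the feedback loop in its simplest form.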
Fourth, accountability and attribution become murky. Who bears legal and moral responsibility when an autonomous vessel errs: the operator, the commander, the sensor manufacturer, or the algorithm designer? The diffusion of responsibility complicates post-incident justice and invites strategic ambiguity. States may prefer plausible deniability; non-state actors will exploit that ambiguity. The law of the sea and arms control frameworks did not anticipate swarms of autonomous platforms making kinetic decisions in milliseconds.
So what are the sensible policy and technical guardrails we should pursue now, while the technology matures? I propose five imperatives.
1) Design for graceful degradation. Autonomy must default to conservative, non-lethal behaviors under uncertainty. That means architectures that fall back to holding fire, loitering, or requesting human intervention when identification confidence drops below well-tested thresholds, as sketched below. Technical standards for uncertainty quantification need to be public and auditable.
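What such a fallback might look like in code, as a sketch: a mode selector keyed to identification confidence, with hysteresis so the platform does not oscillate at a boundary. All thresholds and mode names are invented; in a real system they would come from tested, auditable standards.

```python
# Hypothetical fallback policy: map identification confidence to a
# conservative behavioral mode. Thresholds are illustrative only.

from enum import Enum

class Mode(Enum):
    ENGAGE_PERMITTED = "engage_permitted"   # still subject to human authority
    TRACK_AND_LOITER = "track_and_loiter"
    HOLD_FIRE_AND_QUERY = "hold_fire_and_query_human"

def select_mode(confidence: float, current: Mode) -> Mode:
    """Degrade gracefully as confidence drops; require extra margin
    (hysteresis) before escalating back up."""
    ENGAGE_UP, ENGAGE_DOWN = 0.97, 0.93   # asymmetric thresholds
    LOITER_UP, LOITER_DOWN = 0.75, 0.70

    if confidence >= ENGAGE_UP:
        return Mode.ENGAGE_PERMITTED
    if confidence >= ENGAGE_DOWN and current is Mode.ENGAGE_PERMITTED:
        return Mode.ENGAGE_PERMITTED      # small dips do not flip modes
    if confidence >= LOITER_UP:
        return Mode.TRACK_AND_LOITER
    if confidence >= LOITER_DOWN and current is Mode.TRACK_AND_LOITER:
        return Mode.TRACK_AND_LOITER
    return Mode.HOLD_FIRE_AND_QUERY       # conservative default

mode = Mode.HOLD_FIRE_AND_QUERY
for c in (0.98, 0.95, 0.90, 0.72, 0.68):
    mode = select_mode(c, mode)
    print(f"confidence={c:.2f} -> {mode.value}")
```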
2) Harden C2 and sensor resilience. Investment in redundant and diverse communications, resilient navigation (including anti-spoofing measures), and layered sensor suites will reduce single points of failure. Equally important are robust cyber defenses and operational concepts that assume contested networks.
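One resilience pattern is easy to sketch: prefer the healthiest of several diverse links and fail over automatically, reporting total loss rather than guessing. The link names and static health flags below are hypothetical; a real system would measure link health continuously with authenticated heartbeats.

```python
# Hypothetical link selector over diverse communication bearers.

LINKS = [
    # (name, priority, healthy?) : health would be measured continuously
    ("satcom_ka", 1, False),        # jammed in this scenario
    ("hf_skywave", 2, True),
    ("line_of_sight_mesh", 3, True),
    ("acoustic_undersea", 4, True),
]

def pick_link(links):
    """Choose the highest-priority link that still passes health checks;
    if nothing is healthy, report loss so the platform can revert to
    pre-briefed conservative behavior instead of guessing."""
    healthy = [l for l in links if l[2]]
    if not healthy:
        return None
    return min(healthy, key=lambda l: l[1])[0]

link = pick_link(LINKS)
print(link or "no healthy link: reverting to conservative autonomy profile")
```

The design point is the last line: loss of communications must select a known, rehearsed behavior, never an improvised one.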
3) Preserve meaningful human control. Human oversight cannot be a ritual checkbox. It must be architected so that humans retain actionable influence over the use of lethal force, including latency-aware interfaces and delegated authority profiles appropriate to mission context. Such designs will limit speed advantages but preserve moral and legal accountability.
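To make “actionable influence” concrete, here is a sketch of a human-authority gate in which lethal action requires a fresh, track-specific approval, and every failure mode resolves toward hold-fire. Field names and the validity window are assumptions.

```python
# Hypothetical human-authority gate. The fail-safe direction is the key
# design point: silence or staleness is treated as "no", never as "yes".

from __future__ import annotations
import time

AUTHORIZATION_TTL_S = 120.0   # illustrative validity window

def may_engage(track_id: str,
               human_approval: dict | None,
               now: float | None = None) -> bool:
    """Permit engagement only with a recent approval for this track."""
    now = time.time() if now is None else now
    if human_approval is None:
        return False                   # silence defaults to hold fire
    if human_approval["track_id"] != track_id:
        return False                   # approval is not transferable
    if now - human_approval["issued_at"] > AUTHORIZATION_TTL_S:
        return False                   # stale approval has expired
    return True

approval = {"track_id": "T-1138", "issued_at": time.time() - 30}
print(may_engage("T-1138", approval))   # True: fresh, matching approval
print(may_engage("T-2187", approval))   # False: wrong track
print(may_engage("T-1138", None))       # False: no human in the loop
```

A gate like this costs speed, exactly as the imperative concedes, but it makes the locus of responsibility legible after the fact.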
4) Promote norms and verification. A narrow arms control regime focused on banning particular lethal functions at sea may be desirable but is politically difficult. A practical alternative is negotiated norms concerning specific behaviors: for example, bans on autonomous weapon systems that independently target humans, transparency measures for autonomous fleets, and agreed protocols for incidents at sea. Verification regimes must include technical measures such as challenge-response identification, authenticated telemetry, and forensic logging standards.
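Two of those technical measures can be sketched with standard cryptographic primitives from Python’s standard library: HMAC-based challenge-response identification, and a hash-chained forensic log in which no entry can be silently rewritten. Key distribution and storage, hard problems in their own right, are deliberately omitted.

```python
# Sketch of two verification primitives using the standard library.
# Key handling is omitted; this is illustrative only.

import hashlib, hmac, json, os

SHARED_KEY = b"pre-shared key for illustration only"

def answer_challenge(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)
response = answer_challenge(SHARED_KEY, challenge)
expected = answer_challenge(SHARED_KEY, challenge)
print("identity verified:", hmac.compare_digest(response, expected))

# Append-only forensic log: each record commits to the one before it,
# so deleting or editing an entry breaks every later hash.
log, prev_hash = [], b"\x00" * 32
for event in ({"t": 0.0, "event": "track_acquired"},
              {"t": 1.5, "event": "hold_fire_default"}):
    payload = json.dumps(event, sort_keys=True).encode()
    entry_hash = hashlib.sha256(prev_hash + payload).hexdigest()
    log.append({"event": event, "hash": entry_hash})
    prev_hash = bytes.fromhex(entry_hash)
print("chain head:", log[-1]["hash"][:16], "...")
```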
5) Emphasize human-machine teaming research. The most effective and ethically tenable near-term architecture mixes human judgment with machine speed. Research should prioritize interfaces that make machine reasoning explainable, enable rapid human override, and distribute responsibility in tractable ways.
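A sketch of the interface contract such research might converge on: every machine recommendation carries the evidence and the veto window a human needs to intervene quickly. The record structure and field names are invented for illustration.

```python
# Hypothetical decision record for human-machine teaming: each machine
# recommendation carries the evidence a human needs to veto it quickly.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    track_id: str
    recommendation: str                # e.g. "track_and_loiter"
    confidence: float
    evidence: list[str] = field(default_factory=list)  # human-readable reasons
    rule_fired: str = ""               # which policy rule produced this
    overridable_until_s: float = 0.0   # veto window for the operator

rec = DecisionRecord(
    track_id="T-1138",
    recommendation="hold_fire_and_query_human",
    confidence=0.71,
    evidence=["AIS transponder active", "speed consistent with trawler"],
    rule_fired="low_confidence_default",
    overridable_until_s=45.0,
)
print(f"{rec.track_id}: {rec.recommendation} ({rec.confidence:.0%})")
for reason in rec.evidence:
    print(" -", reason)
```

Structured records like this are also what makes the forensic logging of imperative 4 worth having: there is something meaningful to log.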
Finally, we must resist two seductive but dangerous narratives. The first is that autonomy will make war less risky for those who wage it. Removing human presence from platforms lowers the political price of violence and may therefore produce more frequent use of force. The second is technological inevitability. Progress is real, yet choices about deployment, doctrine, and law are political. We can shape the path forward.
In sum, fully autonomous naval battles are conceptually possible and certain technologies already point in that direction. The real question is not whether the hardware can be built but whether societies will accept the legal, ethical, and strategic consequences. Prudence argues for cautious integration, robust international dialogue, and a clear commitment to human responsibility. The sea deserves no less.