Autonomy at sea promises to reshape anti-submarine warfare in ways both practical and philosophical. At the level of capability there is a simple argument in favor of unmanned systems: persistent, lower-cost platforms can hold contact, forward-deploy sensors, and buy time for human decision makers. At the level of strategy the argument is more subtle: autonomy redistributes risk away from sailors and toward platforms, and in doing so it reshapes incentives, doctrines, and accountability.
Concrete prototypes have already tested many of these ideas. DARPA’s ACTUV program and its Sea Hunter demonstrator explored an unmanned surface vessel designed to trail and track quiet diesel-electric submarines while complying with maritime rules of the road. The program was transitioned to the Office of Naval Research for further development and for experimentation with modular sensor suites and autonomy algorithms.
The Office of Naval Research and program offices across the fleet continue to treat ASW as a systems problem that invites unmanned contributors. ONR frames ASW work as an effort to mature sensors, processors, and tactics that can be deployed on crewed and uncrewed platforms and in the environment itself. These efforts emphasize improved search, detection, localization, and tactical decision aids that combine human operators with automated processing.
Those programmatic successes illustrate the reward side. Unmanned surface and undersea vehicles can remain persistently on station, are inexpensive to risk, and can be reconfigured for different sensors and payloads. Large unmanned undersea vehicles suggest new operational concepts such as long-duration acoustic surveillance, mine emplacement and countermeasures, and distributed sensing, so that a local contact does not vanish the moment a manned ship must leave the area. The Navy increasingly plans for mixed manned-unmanned task groups that extend the range and volume of coverage.
But technology demonstrations do not erase the many practical risks. First, the physics of detection has not changed. Sound propagation in the ocean is messy, time varying, and highly dependent on bathymetry, temperature, salinity, and shipping noise. An acoustically noisy platform cannot reliably substitute for the towed arrays and well-tuned processing suites found on manned ASW ships and aircraft. Automation can accelerate processing of acoustic returns and reduce operator workload, but the underlying signal-to-noise constraints remain. Navy solicitations and small business research topics explicitly call for automated sonobuoy and passive acoustic processing to augment human operators rather than to replace them outright.
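Those signal-to-noise constraints can be made concrete with the standard passive sonar equation (signal excess SE = SL − TL − NL + DI − DT). The sketch below uses hypothetical but plausible-order dB values, chosen only to show how thin the detection margin against a quiet diesel-electric boat can be; none of the numbers describe any real platform.

```python
# Back-of-the-envelope passive sonar budget using the standard passive
# sonar equation: SE = SL - TL - NL + DI - DT.
# Levels are in dB re 1 uPa; gains and thresholds in dB. All numbers
# below are hypothetical illustrations, not real platform figures.

def signal_excess(source_level, transmission_loss, noise_level,
                  directivity_index, detection_threshold):
    """Signal excess in dB; detection is plausible only when positive."""
    return (source_level - transmission_loss - noise_level
            + directivity_index - detection_threshold)

# A quiet submarine on battery, heard by a towed array in shipping noise.
se = signal_excess(source_level=120.0,      # radiated noise of a quiet boat
                   transmission_loss=70.0,  # spreading + absorption at range
                   noise_level=65.0,        # ambient plus shipping noise
                   directivity_index=20.0,  # array gain
                   detection_threshold=10.0)
print(f"signal excess: {se:+.1f} dB")  # → signal excess: -5.0 dB
```

A negative signal excess means the contact is below the detection threshold no matter how fast the downstream processing runs, which is the point of the paragraph above: automation shifts workload, not physics.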
Second, autonomy compounds risks associated with classification and decision. Machine learning classifiers trained on curated data sets can perform well in testing, but the ocean and the adversary will eventually present conditions outside the training distribution. False positives generate expensive reactions, and false negatives can have strategic costs. This is not an abstract concern. The human-in-the-loop architecture matters because an autonomous trail that culminates in kinetic action requires precise, auditable decision pathways if escalation is to be controlled.
Third, systems are only as resilient as their communications, cyber defenses, and supply chains. Large unmanned platforms such as extra-large UUV prototypes have encountered industrial and schedule risk during development. The Orca XLUUV effort illustrates how ambitious platforms can be delayed and over budget as programs push into first-of-class engineering and complex integration. Programs of this scale carry acquisition risk that can blunt operational advantage if industrial problems or supply chain shortages delay fielding.
Fourth, the proliferation of unmanned sea craft creates a new threat vector. Naval warfare in 2022 exposed how relatively low-cost unmanned surface vessels and explosive-laden small craft can be used in asymmetric attacks on ports and ships. The October 2022 incidents in the Black Sea demonstrated that sea drones are not only a tool for surveillance but also a weapon that changes how navies posture and protect high value units. The same low-cost autonomy that can help track a diesel submarine can be re-purposed to menace harbors and littoral forces.
Fifth, legal and normative problems remain unsettled. At-sea autonomy must comply with COLREGS and with rules of engagement that are written for human perception and command chains. Sea Hunter’s test campaigns explicitly measured COLREGS compliance as a technical requirement; satisfying the letter of navigation law is necessary but not sufficient for operational acceptability in contested waters. Autonomy that interprets ambiguous sensor data is particularly vulnerable to miscalibration between legal compliance and tactical prudence.
Those challenges suggest a set of engineering and policy responses. On the engineering side we need layered sensing and cross-modal fusion: passive acoustics, active acoustics when doctrine allows, distributed arrays, and correlated surface and airborne sensors. We also need explainable and calibrated AI systems that present uncertainty to operators, not binary outputs. Systems should be designed to fail safe, to hand control back to humans in ambiguous situations, and to degrade gracefully under jamming or cyberattack.
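One way to make "present uncertainty, fail safe, and hand control back to humans" concrete is a confidence-gated decision layer. The minimal sketch below assumes a classifier that emits a calibrated probability plus an out-of-distribution score; all thresholds, names, and action labels are hypothetical illustrations of the pattern, not any fielded system's logic.

```python
# Minimal sketch of a confidence-gated decision layer. Assumes an upstream
# classifier that emits a calibrated P(contact is a submarine) and an
# out-of-distribution (OOD) score. Thresholds and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class ContactAssessment:
    probability: float  # calibrated P(contact is submarine), in [0, 1]
    ood_score: float    # high values mean unfamiliar input conditions

def decide(assessment: ContactAssessment,
           act_threshold: float = 0.9,
           dismiss_threshold: float = 0.1,
           ood_limit: float = 0.5) -> str:
    """Act autonomously only when confident AND inside the training
    distribution; everything ambiguous goes back to a human operator."""
    if assessment.ood_score > ood_limit:
        return "REFER_TO_OPERATOR"   # fail safe on unfamiliar conditions
    if assessment.probability >= act_threshold:
        return "MAINTAIN_TRAIL"      # high confidence: keep tracking
    if assessment.probability <= dismiss_threshold:
        return "RESUME_SEARCH"       # high confidence it is not a contact
    return "REFER_TO_OPERATOR"       # uncertain band: human decides

print(decide(ContactAssessment(probability=0.95, ood_score=0.2)))
print(decide(ContactAssessment(probability=0.60, ood_score=0.2)))
```

The design choice worth noting is that the out-of-distribution check runs first: a very confident answer to an input the model has never seen is exactly the failure mode described above, so confidence alone never authorizes autonomous action.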
On the acquisition and force design side the services should adopt a test-fix-test continuum in which prototypes go to sea early and often. The Navy’s mixed portfolio approach is wise: small, affordable units let an operator tolerate losses, while larger experimental assets retire development risk through iterative testing. Where programs push first-of-kind performance, program offices must avoid committing to production before the prototypes have delivered their lessons.
On the policy side international norms for autonomy at sea would reduce the chance of miscalculation. Establishing shared expectations about identification, tracking, and escalation management matters because unmanned vessels can be misidentified or claimed by third parties. Transparency in capability and intent, plus legal frameworks that bind behavior in peacetime and crisis, would do more to preserve stability than purely technical fixes.
Finally, there is an ethical and strategic consideration that cannot be engineered away. Delegating persistent surveillance and contact management to machines reduces the immediate human cost of ASW, but it may lower the political threshold for risky operations. The moral distance between an operator in a distant control center and a submarine’s crew in the ocean is real. We must ask whether cheaper and more persistent tracking will make it easier to normalize long-range kinetic options that were previously politically costly. The answer to that question is not in the sensors or the code. It is a societal choice.
Autonomy will be an invaluable tool in anti-submarine warfare when used within doctrines that preserve human judgment, when acquisition treats prototypes as learning instruments, and when legal and technical safeguards constrain escalation. The reward is a distributed, persistent ASW posture that reduces risk to sailors and improves coverage. The risk is a brittle reliance on algorithms and networks that can be spoofed, jammed, or misapplied. Technology alone cannot decide whether such tradeoffs are acceptable. Those decisions belong to publics, to navies, and to the professions that advise them.