Autonomous systems are now a living presence at sea. From experimental unmanned surface vessels designed to trail submarines to commercial projects exploring remotely supervised cargo ships, autonomy is shifting maritime operations away from an era in which accountability maps neatly onto a human body aboard a bridge. The legal and moral architectures that supported that older map strain when confronted with systems that can perceive, decide, and act with varying degrees of human supervision. This essay examines where responsibility currently sits, the gaps that maritime autonomy reveals, and pragmatic steps that could restore credible accountability without falling back on naïve technophobia.

First, some existing anchor points. States and militaries have not ignored the problem. The United States Department of Defense updated its policy on autonomy in weapon systems in 2023, reaffirming that weapon systems incorporating autonomous functions should be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. The update also emphasizes testing, review, and management oversight before fielding such systems.

On the humanitarian law front, international bodies and non-governmental organizations argue that autonomy cannot relieve humans and states of core obligations. The International Committee of the Red Cross has urged states to adopt legally binding rules to ensure sufficient human control and to prohibit certain unpredictable autonomous weapons, arguing that human judgment must remain central to targeting decisions.

Maritime law and regulation are already wrestling with noncombat autonomy. The International Maritime Organization completed a regulatory scoping exercise for Maritime Autonomous Surface Ships in 2021. That exercise identified fundamental questions about how to interpret concepts such as master, crew, and responsible person when ships are remotely operated or fully autonomous. It also highlighted gaps in instruments ranging from collision rules to search and rescue and seafarer certification.

These three facts, taken together, define the problem space for accountability at sea. Naval planners and technologists will point to prototypes such as DARPA’s Sea Hunter to show that autonomy can be engineered to obey navigation rules and perform complex tasks under sparse supervision. Sea Hunter’s trials demonstrated autonomy suites and remote supervisory control that, in controlled conditions, complied with collision regulations and executed waypoint missions without continuous human steering. Such demonstrations are important, but they also obscure what responsibility looks like when the stakes are kinetic or when an autonomous vessel encounters ambiguous, contested, or degraded environments.

Where does law attribute responsibility? Two complementary strands matter. For state conduct in hostilities, international humanitarian law and the general rules on state responsibility mean that a state cannot evade liability for the wrongful acts of its agents by claiming that a machine decided on its own. Criminal liability for individuals, such as commanders or operators, depends on identifying culpable mental states or negligence. Civil and administrative liability depends on domestic rules but typically requires causal links between an actor’s conduct and the harm suffered. Autonomous systems complicate every link in these chains. They distribute decision making across designers, data providers, operators, commanders, and the machine itself. The result is a multi-node causal web that resists simple attributions of guilt or liability.

Two recurring fault lines follow from that complexity. The first is causal opacity. Modern perception stacks, machine learning models, sensor fusion algorithms, and behavior planners can be brittle, nonintuitive, and poorly documented. When an autonomous surface vessel misidentifies a target or fails to take evasive action, investigators need auditable records. Without robust, immutable system logs and forensic tools that reveal sensor inputs, internal state, and operator commands, it will be practically impossible to say whether the fault lay in sensor degradation, adversary spoofing, a flawed objective function, negligent supervision, or an unforeseeable emergent behavior.

The second fault line is normative delegation. States and services must ask what kinds of lethal or damaging decisions they are prepared to permit an algorithm to make. The concept of meaningful human control emerged in multilateral discussions precisely because delegating to an opaque process the decision to kill or to inflict damage undermines legal and ethical accountability. Meaningful human control is not an abstract slogan. It requires concrete structures: constraints on deployment contexts, thresholds of human review before lethal action, and operational modes that preserve human judgment in the planning and execution loop. International debates in fora such as the Convention on Certain Conventional Weapons reflect the same tension between technical capability and legal constraint.

Maritime engagements add distinct complications. The ocean is a crowded domain with civilians, neutral commercial traffic, and strict rules on collision and safety. An autonomous warship or armed USV operating in such an environment must simultaneously respect the law of naval armed conflict, coastal state rights under the Law of the Sea, and peacetime maritime regulations. The IMO’s regulatory scoping exercise thus matters: it shows that regulatory instruments conceived for human crews can produce ambiguities when applied to machines. Who is the master if the “bridge” is a shore control center? Who signs the safety certificate? How does search and rescue work if no human is aboard to render assistance? These are not hypothetical concerns. They are practical obstacles to credible accountability at sea.

So what practical reforms would improve accountability without arresting innovation? I propose five interlocking measures.

1) Mandatory predeployment legal reviews and technical audits. Article 36 reviews under Additional Protocol I obligate states to ensure that new weapons comply with international law. That obligation should be operationalized for maritime autonomous systems through documented legal reviews tied to technical audit trails. Audits must include red-team testing for spoofing, failure mode analyses, and realistic environmental testing. The ICRC’s guidance and existing Article 36 review practice provide a procedural template.

2) Forensic logging and data provenance requirements. Systems must record sensor streams, classifier confidences, mission objectives, operator inputs, and communications in tamper-resistant form. Those logs should be standardized so that investigators, prosecutors, and regulators can reconstruct events and assign responsibility. Technical standards for log integrity should be part of certification before fielding; a minimal sketch of one way to make a log tamper-evident appears after this list.

3) Defined modes of human control tied to mission profiles. Not all autonomy is the same. States should codify operational boundaries within which autonomy can act without immediate human intervention, and expressly reserve lethal targeting decisions to modes that require preauthorization and human confirmation. Military doctrine and rules of engagement must incorporate these distinctions and train officers accordingly; a toy illustration of such a mode gate also follows the list.

4) Liability clarity for manufacturers and integrators. Where negligence in design, testing, or supply chain security causes harm, contract and tort law should create predictable avenues for compensation and deterrence. States should not use procurement contracts to immunize suppliers from liability in cases of obvious negligence. Clear domestic regimes reduce the perverse incentive to offload risk onto victims.

5) International cooperation on norms and a maritime annex for autonomous weaponry. The IMO’s work on MASS shows that maritime regulation can adapt when states cooperate. A similar international process, grounded in existing humanitarian law and maritime safety law, should produce a maritime‑specific annex addressing armed autonomous surface vessels. That annex should articulate minimum human control thresholds, harmonized standards for logkeeping and certification, and mechanisms for incident investigation and state‑to‑state remedies.
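To make measure 2 concrete, the sketch below shows one way “tamper-resistant” can be realized: each log record carries the hash of its predecessor, so any retrospective edit or deletion breaks the chain and is detectable on verification. It is a minimal Python illustration under assumed names (MissionLog, channel labels such as "operator_input"); it is not a certification standard or any fielded system’s format.

```python
import hashlib
import json
import time
from dataclasses import dataclass
from typing import List

# Minimal sketch of a tamper-evident mission log. Each record stores the hash
# of the previous record, so altering or deleting any entry breaks the chain.
# Channel names and payload fields are illustrative assumptions, not a standard.

@dataclass
class LogRecord:
    timestamp: float
    channel: str      # e.g. "sensor_frame", "classifier_confidence", "operator_input"
    payload: dict
    prev_hash: str
    record_hash: str = ""

    def seal(self) -> None:
        # Hash a canonical serialization of the record plus its predecessor's hash.
        body = json.dumps(
            {"t": self.timestamp, "ch": self.channel,
             "p": self.payload, "prev": self.prev_hash},
            sort_keys=True,
        )
        self.record_hash = hashlib.sha256(body.encode()).hexdigest()


class MissionLog:
    def __init__(self) -> None:
        self.records: List[LogRecord] = []

    def append(self, channel: str, payload: dict) -> LogRecord:
        prev = self.records[-1].record_hash if self.records else "genesis"
        record = LogRecord(time.time(), channel, payload, prev)
        record.seal()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash in order; False means the log was altered."""
        prev = "genesis"
        for record in self.records:
            check = LogRecord(record.timestamp, record.channel, record.payload, prev)
            check.seal()
            if check.record_hash != record.record_hash or record.prev_hash != prev:
                return False
            prev = record.record_hash
        return True


# Example: log an operator command, then let an investigator check integrity.
log = MissionLog()
log.append("operator_input", {"command": "hold_fire"})
assert log.verify()
```

In practice the chain would also be anchored in write-once storage or countersigned off-board, but the point stands: log integrity can be something an investigator checks, not something an operator asserts.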
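Measure 3 can likewise be made mechanically explicit. The toy gate below refuses lethal action unless the vehicle is in an explicitly authorized mode, the mission was preauthorized, and a human confirms the specific engagement. The mode names and the confirm_fn callback are assumptions chosen for illustration, not doctrine or any service’s rules of engagement.

```python
from enum import Enum, auto
from typing import Callable

# Illustrative autonomy modes and an engagement gate. Mode names and the
# confirm_fn callback are assumptions for illustration, not doctrine.

class AutonomyMode(Enum):
    NAVIGATE_ONLY = auto()            # waypoint following and collision avoidance
    SUPERVISED_SURVEILLANCE = auto()  # sensing and tracking, no weapon effects
    HUMAN_AUTHORIZED_ENGAGE = auto()  # lethal action possible, but gated below


def may_engage(mode: AutonomyMode,
               mission_preauthorized: bool,
               confirm_fn: Callable[[], bool]) -> bool:
    """Allow lethal action only when every human-control condition holds.

    confirm_fn stands in for a positive, per-engagement confirmation by a
    named operator (for example, acknowledging a specific track at a shore
    control station); it must never be satisfied by the vehicle itself.
    """
    if mode is not AutonomyMode.HUMAN_AUTHORIZED_ENGAGE:
        return False
    if not mission_preauthorized:
        return False
    return bool(confirm_fn())
```

What matters is not the code but what it makes legible: the conditions under which the machine may act are enumerable, loggable (ideally into the same chained record sketched above), and therefore reviewable after the fact.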

There are, of course, unsettled questions. How do we balance the operational advantages of autonomy, such as persistence and reduced personnel risk, with the moral cost of eroding human responsibility? How should accountability be apportioned between a fleet commander who authorizes a mission, an operator who supervises multiple assets, and a developer whose training data encoded biases? The temptation will be to write long lists of required mitigations and to assume that checklist compliance equals justice. It will not. Accountability requires institutions that can adjudicate complex causal chains, transparent technology that supports those institutions, and political will to accept the consequences of delegation when things go wrong.

Finally, let us be candid. Machines will not be moral agents in any useful sense. They will remain instruments of human will. How we design law and policy around them will determine whether they become a means to diffuse responsibility or a set of technically auditable tools that allow us to discharge responsibility more reliably than before. If we want autonomous ships and maritime robots that strengthen rather than weaken accountability, we must insist on auditable systems, meaningful human control where life and liberty are at stake, and harmonized international rules that reflect the ocean’s peculiarities. Those steps are not merely bureaucratic. They are the moral infrastructure that must accompany any delegation of violence to silicon and steel.