Science fiction has long trafficked in swarms: clouds of insects, locusts of machines, orchestras of tiny agents that overwhelm by numbers and improvisation. Over the last decade the gap between that fiction and military engineering has narrowed. The United States Department of Defense and its research partners have moved beyond proof of concept and into repeated experimentation. The question is not whether swarms are technically interesting. The question is whether the behaviors and capabilities we see in laboratories and exercises are approaching the threshold at which they become doctrine, codified and operationalized in the ways commanders train, plan, and fight.

A short chronology helps to set the terms of debate. In 2016 the Strategic Capabilities Office, working with Naval Air Systems Command, conducted the Perdix microdrone demonstration, releasing over a hundred small vehicles from fighter aircraft and showing basic swarm behaviors such as collective decision making, adaptive formation flying, and self-healing. DARPA followed with multiple programs that explicitly targeted massed autonomy, from Gremlins, which explored air-launch and airborne recovery of volleys of low-cost unmanned aircraft from motherships, to OFFSET, a multi-sprint effort focused on urban swarm tactics, human-swarm teaming, and the robustness of swarm software in messy environments. More recently, U.S. services have moved swarm concepts into joint experimentation. Exercises such as Edge 22 and Project Convergence 2022 included demonstrations and plans for soldier-controlled swarms, offensive swarm prototypes, and cross-domain integration across air, ground, and maritime platforms.

These activities are evidence of serious interest, not inevitability. There are three reasons why that distinction matters for doctrine.

First, doctrine is not a catalogue of technologies. Doctrine is the set of guiding principles and established practices that inform training, force structure, procurement, and legal review. For a capability to become doctrinal it must survive a political and institutional gauntlet. The Department of Defense already governs autonomy in weapon systems under a formal policy regime. DOD Directive 3000.09, first issued in 2012, sets thresholds for review when a system can select and engage targets without human intervention. That framework does not categorically ban autonomy, but it does require senior review and a substantial evidentiary bar for systems that approach fully autonomous targeting. In parallel, policy and advisory bodies inside and outside government continue to press for clearer definitions of what counts as AI-enabled and for processes to govern software updates and model retraining. Those debates are not academic hair-splitting. They shape whether a swarm concept can be fielded at scale and under what legal and ethical guardrails.

Second, operational utility must exceed the costs and risks. Swarms promise several attractive military effects. A mass of small, inexpensive platforms can provide resilient sensing, saturate enemy defenses, enable distributed electronic attack, and achieve attrition-tolerant effects that would be costly to replicate with large assets. They can act as a mobile sensor network that pushes detection and decision making to the tactical edge. At the same time, swarms introduce very real challenges: command and control in contested electromagnetic environments, rules for target discrimination and escalation, logistics for deploying and recovering large numbers of small systems, and the vulnerabilities that come from software bugs or emergent behavior in complex collective algorithms. Exercises in urban environments and multi-service testbeds show progress, but they also expose fragilities. A doctrine predicated on swarms must answer how commanders will maintain lawful, proportionate control of lethal effects when autonomy, communications degradation, and rapid tempo converge.
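
The phrase "emergent behavior in complex collective algorithms" can be made concrete with a toy example. The sketch below is a bare-bones version of the classic boids flocking model: three local rules per agent, no central controller. The class names and constants are illustrative inventions for this essay, not drawn from any military program.

```python
# Toy boids-style flocking: each agent follows three local rules
# (cohesion, separation, alignment). No rule mentions a "flock";
# coherent collective motion emerges from purely local logic.
# All names and constants here are illustrative, not from any program.
import math
import random

COHESION, SEPARATION, ALIGNMENT = 0.01, 0.05, 0.125
MIN_DIST = 5.0     # separation radius
MAX_SPEED = 2.0    # speed clamp keeps the toy model stable

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(agents):
    for a in agents:
        others = [b for b in agents if b is not a]
        n = len(others)
        # Rule 1: cohesion -- steer toward the neighbors' center of mass.
        cx = sum(b.x for b in others) / n
        cy = sum(b.y for b in others) / n
        a.vx += (cx - a.x) * COHESION
        a.vy += (cy - a.y) * COHESION
        # Rule 2: separation -- steer away from agents that are too close.
        for b in others:
            if math.hypot(b.x - a.x, b.y - a.y) < MIN_DIST:
                a.vx -= (b.x - a.x) * SEPARATION
                a.vy -= (b.y - a.y) * SEPARATION
        # Rule 3: alignment -- match the neighbors' average velocity.
        a.vx += (sum(b.vx for b in others) / n - a.vx) * ALIGNMENT
        a.vy += (sum(b.vy for b in others) / n - a.vy) * ALIGNMENT
        # Clamp speed.
        speed = math.hypot(a.vx, a.vy)
        if speed > MAX_SPEED:
            a.vx, a.vy = a.vx / speed * MAX_SPEED, a.vy / speed * MAX_SPEED
    for a in agents:
        a.x += a.vx
        a.y += a.vy

flock = [Agent() for _ in range(20)]
for _ in range(100):
    step(flock)
```

The fragility the paragraph describes follows directly from this structure: small changes to the constants can flip cohesive flocking into dispersal or collision, and no single line of code encodes the collective behavior a tester must certify. That is why test and evaluation of collective logic is categorically harder than testing a single platform.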

Third, adoption depends on adversaries and countermeasures. Doctrine is not written in a vacuum. The proliferation of small drones, loitering munitions, and modular payloads means that many potential foes can attempt swarm-like employment. That reality creates both an incentive and a constraint for U.S. forces. On one hand, adopting swarm tactics can restore asymmetries of scale and reduce risk to high-value platforms. On the other hand, if adversaries field low-cost counter-swarm tools, such as electronic attack, directed energy, or massed kinetic shooters, the return on investment for swarm deployments changes. The services are already hedging, investing in counter-swarm capabilities and layered defenses even as they test offensive swarm prototypes.

If swarms are to graduate from experiment to doctrine, four institutional conditions must be satisfied.

1) Conceptual clarity. The department must settle practical definitions of swarm, autonomy, and AI-enabled functionality for doctrinal use. This includes numeric thresholds, the division between centralized and distributed command, and precise descriptions of human supervisory control. Vague metaphors of insect or wolf-pack behavior are useful for the imagination. They are poor substitutes for the definitional precision doctrine demands.

2) Legal and ethical frameworks integrated into procurement. The DOD's review processes for autonomous weapons must be operationalized so that software updates, retraining, and emergent behaviors do not repeatedly trigger ad hoc halts to deployment. That will require a regime of continuous testing, transparent red-teaming, and clear chains of accountability for outcomes arising from collective decision logic.

3) Robust, resilient C2 and communications architectures. Doctrine must assume a contested electromagnetic spectrum, denied GPS, and intermittent bandwidth. Swarms that rely on unprotected communications will remain R&D curiosities. Doctrine will favor implementations that degrade gracefully, retain meaningful human supervision whenever lethal effects are possible, and fit cleanly into joint kill chains such as the emerging all-domain command and control concepts (a minimal sketch of such a fallback policy follows this list).

4) Training, logistics, and cost models appropriate to massed systems. Doctrinal integration means writing new tactics, techniques, and procedures and redesigning sustainment models to handle disposable or semi-reusable fleets. It means rethinking formations in which manned assets become a handful of nodes in a networked system composed largely of machines.
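
To make the third condition less abstract, here is a minimal sketch of what "degrade gracefully while retaining meaningful human supervision" could look like at the level of a single agent's control logic. Everything in it is a labeled assumption: the mode names, the timeout values, and the idea of a short-lived, target-specific human authorization token are hypothetical illustrations for this essay, not any service's actual C2 protocol.

```python
# Hypothetical sketch: an agent degrades to safer modes as its command
# link goes silent, and lethal engagement requires a fresh, target-
# specific human authorization. All states, timeouts, and interfaces
# are illustrative assumptions, not a fielded architecture.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    MISSION = auto()   # normal tasking under supervision
    LOITER = auto()    # comms degraded: hold and sense, do not engage
    RETURN = auto()    # comms lost too long: fly home, fully inert

LINK_TIMEOUT_S = 30.0    # degrade after this long without the C2 link
ABORT_TIMEOUT_S = 300.0  # give up and return after this long
AUTH_VALIDITY_S = 10.0   # a human engagement authorization expires fast

@dataclass
class HumanAuthorization:
    target_id: str
    issued_at: float     # seconds, same clock as `now`

class SwarmAgent:
    def __init__(self, now: float = 0.0):
        self.mode = Mode.MISSION
        self.last_link_ok = now

    def on_heartbeat(self, now: float) -> None:
        """Called whenever the command link is confirmed alive."""
        self.last_link_ok = now
        if self.mode is Mode.LOITER:
            self.mode = Mode.MISSION   # link restored: resume tasking

    def tick(self, now: float) -> None:
        """Degrade to safer modes as link silence accumulates."""
        silence = now - self.last_link_ok
        if silence > ABORT_TIMEOUT_S:
            self.mode = Mode.RETURN
        elif silence > LINK_TIMEOUT_S and self.mode is Mode.MISSION:
            self.mode = Mode.LOITER

    def may_engage(self, target_id: str,
                   auth: Optional[HumanAuthorization], now: float) -> bool:
        """Lethal effects require mission mode plus a fresh,
        target-specific human authorization. The default is no."""
        return (
            self.mode is Mode.MISSION
            and auth is not None
            and auth.target_id == target_id
            and (now - auth.issued_at) <= AUTH_VALIDITY_S
        )
```

The design choice worth underlining is the direction of the defaults: silence pushes the agent toward inert behavior, and lethality is gated on a positive, recent, target-specific human act rather than on the absence of an abort. A doctrine-ready architecture would be vastly more complicated, but it would have to preserve exactly this shape.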

Philosophically, we must confront what a doctrinal shift toward swarms means for responsibility and moral imagination. A doctrine that privileges remote massed autonomy risks amplifying moral distancing. Machines may be better at sensing or at delivering lethal effects with precision, but they cannot assume moral responsibility. That legal and ethical responsibility attaches to human commanders and to institutions. If doctrine obscures that chain, whether through complexity or by outsourcing judgment to inscrutable algorithms, then it will have failed a fundamental test of legitimacy.

So where are we in mid-2023? The answer is cautious momentum. Demonstrations and exercises show capability maturation. Programs such as DARPA's OFFSET and the Perdix experiments demonstrate that collective behaviors are achievable at scale. Service experiments during Project Convergence and Edge 22 signal interest in translating those behaviors into operational tactics. At the same time, policy review, legal scrutiny, and the practical headaches of communications and logistics mean that doctrinal codification is neither trivial nor imminent.

My prognosis is twofold. Near term, swarms will proliferate as a set of tactical options in niche roles: resilient sensing, area denial, collaborative electronic warfare, and low‑cost attritable effects that augment, but do not replace, human‑involved decision chains. Medium term, if the department builds the institutional scaffolding described above, swarm principles will be folded into doctrine as a mode of employment rather than as a standalone panacea. Doctrine will accept swarms where the operational environment and legal constraints align and will reserve strict human oversight where the stakes are highest.

Doctrinal change is ultimately conservative by design. Militaries write doctrine the way they pay for logistics: with an eye for predictable outcomes and accountable risks. Swarm technologies tempt us with radical new ways to manage volume, risk, and attention. They also tempt us to outsource judgment in ways that military institutions and democratic societies should resist. The path from laboratory spectacle to doctrine is open. It will be narrow, contested, and morally freighted. If we want swarms in our doctrine, we must do more than perfect algorithms. We must build clarity, law, and institutional practices that make their use both effective and responsible.