This year’s CyCon, convened under the deliberately stark theme “Meeting Reality,” felt less like an academic symposium and more like a collective inventory of lessons learned, near-misses, and stubborn uncertainties about the deepening fusion of cyber and autonomous technologies. The NATO CCDCOE framed the conference as a moment to re-examine assumptions about what technologies actually deliver in conflict, and what they merely promise.
A practical throughline ran across the plenaries and papers: autonomy is no longer hypothetical in conflict environments. From low-cost drone countermeasures in the Ukrainian theatre to machine-assisted situational awareness inside military exercises, speakers treated autonomy as an active variable shaping both tactics and strategy. That was apparent in the proceedings, which collected empirical case studies, wargaming experiments, and technical evaluations rather than abstract manifestos.
Three clusters of insight seemed to dominate the conversation, and they merit attention from technologists and policymakers alike.
1) Human-centred autonomy is still the hard problem. Multiple authors presented work on how automated tools alter analyst cognition, what metrics matter in operational contexts, and how to preserve human judgment when machine recommendations arrive at machine speed. Of particular note was a study that assessed automated tools in a wargaming environment and argued for quantitative and qualitative requirements to support human-centred cyber situational awareness. The practical message was clear: autonomy without carefully designed interfaces and trust metrics will produce brittle systems that harm decision quality rather than enhance it.
2) The battlefield is teaching blunt lessons about low-tech vulnerabilities and asymmetric autonomy. Several contributions examined how inexpensive drones and off-the-shelf autonomy are reshaping insurgent and hybrid tactics. Case studies from the Ukraine conflict illustrated how simple capabilities, when widely available, upend assumptions about who can project force and how. Papers describing low-cost drone detection kits, alongside critiques of small-drone employment, remind us that the diffusion of autonomy complicates attribution and defence across a wide spectrum of actors.
3) Machine learning remains powerful, but fragile in operational settings. Presentations that built on the CCDCOE’s Locked Shields exercise highlighted persistent weaknesses in ML models used to detect command-and-control traffic and other malicious patterns. Authors proposed mitigation techniques but underlined a tougher truth: many existing models fail to generalize under realistic adversarial conditions, which creates a false sense of security if operators treat them as turnkey solutions. This reinforces the previous point about human oversight and robust evaluation.
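To make that fragility concrete, here is a minimal sketch, assuming entirely synthetic flow features and invented distributions (nothing here is drawn from the Locked Shields datasets), of how a command-and-control detector that looks near-perfect on its own distribution can degrade once an adversary shapes the observable features:

```python
# A hypothetical illustration of the generalization gap: a classifier for
# command-and-control traffic is trained on "permissive" synthetic flows and
# then scored on a shifted set where the adversary shapes its observable
# features. Feature names and distributions are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_flows(n, c2_jitter, c2_entropy):
    """Synthetic flow features: [beacon inter-arrival jitter, payload entropy, bytes]."""
    benign = np.column_stack([rng.exponential(1.0, n),
                              rng.uniform(3.0, 6.0, n),
                              rng.lognormal(8.0, 1.0, n)])
    c2 = np.column_stack([np.abs(rng.normal(*c2_jitter, n)),
                          rng.uniform(*c2_entropy, n),
                          rng.lognormal(6.0, 1.0, n)])
    X = np.vstack([benign, c2])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Training data: very regular beaconing and high-entropy payloads.
X_train, y_train = make_flows(2000, c2_jitter=(0.1, 0.02), c2_entropy=(6.5, 8.0))
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# In-distribution holdout vs. a shifted set where the adversary adds timing
# jitter and pads payloads so both features overlap benign traffic.
X_iid, y_iid = make_flows(500, c2_jitter=(0.1, 0.02), c2_entropy=(6.5, 8.0))
X_shift, y_shift = make_flows(500, c2_jitter=(1.0, 0.8), c2_entropy=(3.5, 6.0))

print("AUC, in-distribution:", roc_auc_score(y_iid, clf.predict_proba(X_iid)[:, 1]))
print("AUC, shifted:        ", roc_auc_score(y_shift, clf.predict_proba(X_shift)[:, 1]))
```

The point of the exercise is the gap between the two AUC scores, not the particular numbers; an evaluation regime that never produces the second score is measuring the permissive world, not the contested one.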
Policy and legal panels did not lag behind the technical debate. Keynotes and plenaries examined whether current legal frameworks can accommodate decisions that are delegated to software, and whether existing doctrines of responsibility and proportionality make sense when human roles are distributed across networks and autonomous agents. The conference did not settle these debates, but it did move them from abstract admonitions to operationally grounded questions: what is the minimal, meaningful human control required for lawful action, and how can it be demonstrated in audit trails and design artefacts?
A few tactical and strategic takeaways for practitioners and planners:
- Design for degraded comms. Several papers and panels stressed that autonomy designed to rely on persistent connectivity will fail when communications are contested. Systems must degrade gracefully, and designers must plan explicit failure modes rather than optimistic autonomy envelopes; a minimal sketch of this pattern follows the list.
- Invest in evaluation regimes that mimic adversarial conditions. The recurring evidence is that models trained on permissive datasets break in the field. Red-team-style testing and exercise-derived datasets should be requirements for any mission-critical ML deployment.
- Keep the human in the loop by design, not by hope. Human involvement should be engineered into the decision architecture, with explicit roles, latency budgets, and measurable trust thresholds (see the second sketch below). Otherwise autonomy will not extend human capability so much as rearrange failure modes.
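On the first takeaway, a minimal sketch of what “degrade gracefully” can mean in practice, assuming a hypothetical supervisory heartbeat; the mode names and timeout values are invented for illustration, not drawn from any fielded system:

```python
# Hypothetical sketch of an autonomy envelope that contracts as the
# supervisory link degrades, instead of assuming persistent connectivity.
import time
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    SUPERVISED = "remote operator confirms every action"
    LOCAL_CONSERVATIVE = "only pre-authorized, reversible actions"
    HOLD = "maintain position, collect data, take no action"

@dataclass
class LinkState:
    last_heartbeat: float  # epoch seconds of the last supervisory heartbeat

def select_mode(link: LinkState, now: float,
                soft_timeout: float = 5.0, hard_timeout: float = 30.0) -> Mode:
    """Shrink the autonomy envelope as the comms outage grows longer."""
    silence = now - link.last_heartbeat
    if silence < soft_timeout:
        return Mode.SUPERVISED
    if silence < hard_timeout:
        return Mode.LOCAL_CONSERVATIVE
    return Mode.HOLD

# Example: 12 seconds of silence drops the system to its conservative envelope.
print(select_mode(LinkState(last_heartbeat=time.time() - 12), now=time.time()))
```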
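And on the last takeaway, a sketch of a decision gate that turns the trust threshold and latency budget into explicit, testable engineering parameters; the recommendation structure, thresholds, and example actions are invented, not taken from any presented system:

```python
# Hypothetical sketch of "human in the loop by design": a decision gate with
# an explicit trust threshold and latency budget rather than ad-hoc escalation.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float   # model-reported confidence in [0, 1]
    reversible: bool    # can the action be undone if the model is wrong?

def route(rec: Recommendation, time_available_s: float,
          trust_threshold: float = 0.95, review_latency_s: float = 120.0) -> str:
    """Decide whether a machine recommendation may proceed or must go to a human."""
    if rec.confidence >= trust_threshold and rec.reversible:
        return "execute (logged for after-action audit)"
    if time_available_s >= review_latency_s:
        return "queue for human review"
    # Not trusted enough to act alone and not enough time for review:
    # the engineered default is restraint, not autonomous escalation.
    return "default to safe action and alert operator"

print(route(Recommendation("block C2 domain", 0.97, reversible=True), time_available_s=10))
print(route(Recommendation("isolate hospital subnet", 0.97, reversible=False), time_available_s=10))
```

The design choice worth noticing is the final branch: when neither machine confidence nor available time clears its threshold, the system defaults to restraint rather than escalation.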
Finally, there is a philosophical register that ran beneath many presentations. Autonomy promises to accelerate sensor-to-decision loops, to surface patterns humans miss, and to scale limited staff across many tasks. Yet CyCon 2023 insisted that acceleration is not inherently virtuous. Without auditable reasoning, predictable failure modes, and institutional practices for accountability, the speed that autonomy provides becomes a multiplier of error. This is not merely a technical constraint; it is a moral and strategic boundary. We can make systems that do great damage faster than ever before, or we can build systems that augment human prudence at speed. The choice is an architecture problem and an ethical one.
In short, CyCon 2023 did not give us certainties. It did something more valuable. It forced practitioners, lawyers, and strategists to confront the messy interface where autonomy meets cyber operations, and to recognize that practical design, realistic testing, and legal imagination must advance together. The conference did not soothe anxieties about autonomous weapons or explain away the fragility of ML. Instead it offered a catalogue of concrete experiments, sobering case studies, and policy prompts that should govern the next phase of development. Those who build and field these systems would do well to treat these proceedings as a practical index of where reality already bites.