CyCon has long been the conference where law, policy, operations and technical research meet under one roof. For scholars and practitioners concerned with autonomy in conflict domains, CyCon’s public calls and past programming send a clear message: autonomy will not be treated as a narrow engineering problem but as an interdisciplinary fault line that implicates law, strategy and ethics as much as algorithms and sensors.

When a conference solicits peer-reviewed work on AI, autonomous weapons and related topics, it shapes the conversation in two ways. First, it elevates technical contributions that expose realistic capability limits and failure modes. Second, it creates a shared vocabulary with which legal and policy experts can interrogate those contributions. The EasyChair call for CyCon 2024 explicitly listed AI and related topics in its scope, signalling that autonomy would be debated in both technical and normative tracks.

Technically oriented sessions are likely to foreground autonomy in cyber defence as much as in physical systems. Recent preprints and laboratory roadmaps make the case for autonomous cyber agents to augment overstretched defenders, but they also concede major gaps in training environments, explainability and generalization from simulation to live networks. Those gaps are the practical constraints that should temper any exuberant claims about fully autonomous cyber defenders.

On the legal and policy side, the past two years have produced new political instruments and multilateral fora focused on the responsible military use of AI and autonomy. That momentum will shape CyCon sessions by supplying a normative frame against which technical proposals will be judged. Expect panels that pair technologists with lawyers and diplomats to test whether proposed autonomy architectures can meet the accountability and human control expectations that states and coalitions are beginning to articulate.

There is a distinct risk that conference soundbites will drift toward two unhelpful extremes. One is a pragmatic fatalism that accepts autonomy as inevitable and thus sidelines hard questions of governance. The other is a utopian prohibitionism that treats every autonomy-enabled tool as a prohibited moral category. Productive CyCon autonomy sessions will avoid both traps by insisting on precise problem statements: what decision the system will make, under what constraints, with what inputs, and how it scales in contested operations. Papers and panels that provide measurable evaluation criteria, reproducible testbeds or transparent failure analyses will be the most valuable contributions.

For attending practitioners and observers, I offer three practical heuristics for judging autonomy sessions. One, ask for operational metrics, not metaphors. Two, demand evidence of human-in-the-loop design and audit trails that could support legal review. Three, treat scenarios and red-team analyses as first-class results rather than optional appendices. Conferences influence procurement and doctrine; modest methodological rigor now narrows the space for dangerous overreach later.

Finally, CyCon’s value is pedagogical as much as prescriptive. Autonomy debates at an interdisciplinary conference are where engineers learn the legal constraints that will matter in deployments and where lawyers learn the real technical limits that constrain policy choices. If CyCon’s autonomy sessions deliver interdisciplinarity with technical honesty, they will move the field away from slogans and toward governance architectures that are commensurate with both the capabilities and the risks of autonomous systems.