Taylor Wessing, a leading international law firm with an active practice in AI and digital regulation, has produced timely legal commentary that illuminates a paradox at the heart of contemporary governance: civilian AI law is advancing rapidly while military AI remains both politically sensitive and institutionally insulated. Their commentary on the EU Artificial Intelligence Act and on the ethics and regulation of AI in defence makes this tension explicit, and does so in terms useful to practitioners and strategists alike.

Two concrete legal facts structure the problem. First, the EU AI Act, which entered into force in 2024, expressly excludes systems designed exclusively for military, defence or national security purposes from its scope. That exclusion is deliberate and grounded in the Treaties and the view that public international law and national security rules are the appropriate regulatory frameworks for armed forces.

Second, at the same time that the EU has drawn a jurisdictional line, several national defence institutions have moved to embed civilian-style ethical guardrails inside military acquisition and use. The United Kingdom is a salient example. The UK Defence Artificial Intelligence Strategy (2022) and subsequent policy work culminated in JSP 936, Dependable Artificial Intelligence in Defence (Part 1: Directive), which sets out governance, assurance and the principle of meaningful human control as core obligations for defence AI. These documents do not create the same external legal constraints as the AI Act, but they do create an internal regulatory architecture that attempts to borrow from civilian norms.

Taylor Wessing’s reading of the landscape is sober. They note that, although military systems sit outside the AI Act’s formal scope, the Act nevertheless creates a normative benchmark. In dual-use contexts, where a system has both civilian and military applications, the Act will bite. Moreover, the Act’s requirements on transparency, documentation and human oversight create expectations that ripple into defence contracting and interoperability. In short, civilian law now exerts a gravitational pull even where it has no direct jurisdiction.

That observation is more than academic hair-splitting. In practice it points to three consequential fault lines.

1) The accountability gap between process and outcome. Defence doctrines such as JSP 936 emphasise governance processes, ethical risk assessments and assurance pathways. Processes matter. But process alone cannot answer the final moral question when lethal force is at issue. If an AI-enabled targeting system produces an error that kills civilians, a robust internal assurance record will not by itself resolve questions of legal responsibility, political legitimacy, or public trust. External, independent scrutiny and clearer chains of legal liability are needed if assurances are to be trustworthy rather than merely rhetorical.

2) The dual-use problem and regulatory circumvention. Systems developed for benign or commercial purposes may be repurposed for defence uses. Where that happens, the AI Act can apply to marketable components and to the companies that supply them, but it will not constrain sovereign decisions about how those components are integrated into military effects. That split jurisdiction can produce regulatory arbitrage and compliance theatre. Taylor Wessing correctly highlights that industry actors and defence lawyers must therefore design compliance programmes that respect both civilian market law and defence obligations, and that explicitly account for the point at which an otherwise regulated product becomes an excluded military capability.

3) Interoperability and export controls. As states embed AI ethics into procurement doctrine, they must still operate with allies and partners whose legal frameworks differ. The absence of a universal military AI treaty means that interoperability will rest on ad hoc alignment of standards, assurance practices and export controls. That is a fragile architecture when markets are global and supply chains are distributed. Taylor Wessing’s practice point is a practical one: lawyers must translate diffuse ethical principles into contractual clauses, testing regimes and audit rights that survive classification and commercial pressure.

What should policymakers, generals, and counsel take from this? I offer four propositions that follow from the law firm’s analysis and from the wider doctrinal material.

First, law matters even when it does not bind. The AI Act functions as a benchmark, and defence organisations should treat it as both an opportunity and a constraint: an opportunity, because civilian conformity can raise the quality of components and documentation; a constraint, because the normative expectations it creates can become political liabilities if defence programmes appear to ignore widely endorsed safeguards.

Second, credibility requires independent assurance. Internal governance frameworks like JSP 936 are necessary but not sufficient. Independent testing, third party auditability and, where appropriate, limited external transparency are essential to bridge the trust deficit between militaries and publics. Without credible external checks, process will be mistaken for prudence.

Third, contracts must be the vehicle of accountability. Where the law is fragmented the private law layer becomes decisive. Defence contracts should embed provenance, explainability thresholds, red-team test results, and clear liabilities for algorithmic failure. Taylor Wessing’s advisory work on AI compliance points to exactly this: lawyers must convert ethical desiderata into enforceable commercial obligations.

Finally, arms control and normative diplomacy remain indispensable. Lawyers and ethicists can improve the architecture inside Ministries of Defence, but only international negotiation can create hard constraints on certain weapon classes and certain delegated lethal decisions. The exclusion of military applications from the AI Act does not absolve states from engaging in treaty-making and normative leadership at forums such as the UN CCW. If the community of democracies wishes to avoid a regulatory race to the bottom, it must use both soft law and hard law in tandem.

Taylor Wessing’s contribution is pragmatic rather than prophetic. They show how a major commercial law practice interprets the shifting regulatory terrain and where legal risk will concentrate for their clients. That vantage point is valuable. It reminds us that regulation is not only a set of prohibitions, but a market signal and a compliance calculus. For scholars and engineers the lesson is normative and tactical. Normatively, we should not outsource moral judgement to code or to compliance checklists. Tactically, if defence actors want robust, lawful and legitimate AI deployment they must treat law and private contracting as instruments of governance and submit them to public scrutiny.

In short, the law will not be the bulwark alone, but neither will technical design. The sensible middle path is a layered regime in which civilian regulation sets the benchmark, defence doctrine institutionalises stringent internal assurance, independent auditors verify claims, and international diplomacy binds states to limits where necessary. Taylor Wessing’s analyses help legal practitioners navigate that layered regime. For everyone else they should be a reminder that technology amplifies our choices rather than erases them. The ethical and legal work of steering that amplification remains primarily political and juridical, not merely technical.