A smouldering tank farm in Rostov is a blunt reminder that modern war is becoming both more mechanised and more morally fraught. Video and official accounts from 2024 and 2025 show Ukrainian unmanned systems striking fuel and military storage sites deep inside Russian territory, producing large fires and secondary detonations that have been widely reported. These actions have been attributed to long-range strikes and coordinated drone operations that overwhelmed air defences before delivering their effects.
At the same time, a technological threshold has been crossed. Ukraine has moved from improvised drone tactics to deploying AI-augmented guidance, perception, and mission-management modules that materially improve hit rates and permit more complex behaviours in contested electronic environments. Independent analyses and field reporting document rapid uptake of these AI-enabled capabilities, particularly for final-approach navigation and automated target recognition that help human operators complete engagements under jamming and the fog of war.
The language of the battlefield has thus shifted, from drones that are flown to targets to systems that can be described as hunters. Ukrainian developers and units have showcased interceptor and counter-kamikaze platforms whose role is to seek, prioritise, and engage hostile aerial threats. Some of these hunter designs pair human supervision with onboard AI that autonomously tracks and stabilises the weapon once the operator authorises engagement. The practical benefits are clear: fewer operators, higher sortie success in cluttered environments, and greater resilience to degraded links. Yet the rhetorical ease with which these platforms are named and praised masks profound moral ambiguities.
At the centre of those ambiguities sits the concept of target-locked autonomy. In a human-in-the-loop model the operator accepts responsibility for lethality by making an affirmative kill decision. In practice, modern modular systems often shift responsibility away from a single discrete act of authorisation and into a distributed sequence of machine-assisted steps: a human designates a target zone, an algorithm prunes candidate signatures, and a terminal-guidance module executes the final steering to a lock. When the system's contribution is substantial, who is morally accountable for misidentification, disproportionate damage, or errors in classification? International law expects a human to exercise judgement on distinction and proportionality. Autonomous target-locking frays that expectation by interposing opaque models between perception and action.
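To make that diffusion of responsibility concrete, here is a deliberately toy sketch in Python. Every name and threshold in it is invented for illustration; it describes no fielded system, only the shape of a pipeline in which the human "decision" is bracketed by machine steps.

```python
# Illustrative sketch only. All names (Candidate, prune_candidates, terminal_lock)
# and thresholds are hypothetical, not drawn from any real weapon architecture.

from dataclasses import dataclass


@dataclass
class Candidate:
    track_id: int
    classifier_score: float  # model confidence that this is a valid military target


def prune_candidates(candidates: list[Candidate], threshold: float = 0.8) -> list[Candidate]:
    """Machine step 1: an algorithm discards low-confidence signatures
    before a human ever sees them."""
    return [c for c in candidates if c.classifier_score >= threshold]


def human_authorises(shortlist: list[Candidate]) -> Candidate | None:
    """Human step: the operator approves the top-ranked candidate.
    Note how little of the perceptual work the operator actually performs."""
    return max(shortlist, key=lambda c: c.classifier_score, default=None)


def terminal_lock(target: Candidate) -> str:
    """Machine step 2: onboard guidance steers to lock after authorisation;
    the human decision is now several layers removed from the final act."""
    return f"locked on track {target.track_id} (score {target.classifier_score:.2f})"


if __name__ == "__main__":
    zone_tracks = [Candidate(1, 0.62), Candidate(2, 0.91), Candidate(3, 0.84)]
    shortlist = prune_candidates(zone_tracks)   # machine prunes
    approved = human_authorises(shortlist)      # human "decides"
    if approved is not None:
        print(terminal_lock(approved))          # machine completes
```

Even in this caricature, the operator does not form the judgement of distinction; they ratify a shortlist the machine has already composed. That is the gap through which accountability leaks.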
Technical reality compounds the ethical worry. Machine vision and sensor fusion perform worst precisely at the margins that matter most. Adversarial conditions such as smoke, camouflage, reflective surfaces, or deliberate sensor deception can provoke confident but false classifications. Electronic warfare can quietly degrade communications while leaving an onboard classifier complacently certain. The result is a system that can project an appearance of precision even while its epistemic basis is fragile. The Rostov incidents highlight this tension: tactical success in igniting fuel tanks does not erase the strategic risk that misapplied autonomy could strike the wrong infrastructure, or catalyse escalation through unintended consequences.
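The point about confidence is worth making concrete. The toy calculation below (invented numbers, no real model) shows why a classifier's softmax "confidence" says nothing about whether the input itself makes sense:

```python
# Illustrative only: softmax reports near-certainty whenever one logit dominates,
# whether or not the input resembles anything in the training distribution.

import math


def softmax(logits: list[float]) -> list[float]:
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


# Logits a vision model might emit for a clear, in-distribution target...
print(softmax([9.0, 1.5, 0.5]))   # roughly [0.999, 0.001, 0.000]

# ...and logits it might emit for smoke, a decoy, or a spoofed sensor return.
# The reported confidence is just as high, because softmax only measures relative
# preference among known classes, not whether the scene was understood at all.
print(softmax([8.5, 1.0, 0.2]))   # roughly [0.999, 0.001, 0.000]
```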
There is also a political economy to consider. The demonstrated effectiveness of AI-augmented munitions and hunters lowers the barrier to deeper strikes and to operations that reach across borders. Once one belligerent normalises target-locked autonomy in offensive roles, adversaries will feel pressured to reciprocate or to invest in countermeasures that further degrade trust in system behaviour. Arms racing in autonomy is not a theoretical fear. Policy analysts have warned that the spread of these capabilities increases the likelihood of mistakes that could cascade into broader confrontation, and that existing multilateral forums are underprepared to adjudicate distributed responsibility for algorithmic decisions in war.
What, then, are the ethical obligations of states and technologists who accept or accelerate these systems onto the battlefield? First, they must preserve meaningful human judgement at the points where law and values demand it. That means clear and verifiable doctrine about what constitutes authorisation and about when machine autonomy is permissive rather than imperative. Second, developers should prioritise explainability, auditability, and robust fail-safe modes that default to non-lethal outcomes under uncertainty; a minimal sketch of such a gate follows below. Third, transparency between states and within alliances will be essential to reduce worst-case misreadings and to design reciprocal confidence-building measures. Finally, legal and ethical review must be continuous, not episodic, because a firmware update can change system behaviour overnight.
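Here is what such a gate might look like in outline. All names and thresholds are hypothetical and chosen purely for illustration; the point is the shape of the logic: every uncertain branch resolves to a non-lethal default, and every decision leaves an auditable record.

```python
# A minimal, hypothetical sketch of a fail-safe engagement gate with an audit trail.
# Nothing here describes a real system; thresholds and names are invented.

import logging
from enum import Enum, auto

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("engagement_audit")


class Decision(Enum):
    ENGAGE = auto()
    ABORT = auto()   # the default, non-lethal outcome


def failsafe_gate(classifier_confidence: float,
                  link_quality: float,
                  human_authorised: bool,
                  min_confidence: float = 0.95,
                  min_link: float = 0.6) -> Decision:
    """Engage only when every condition is independently satisfied;
    any uncertainty resolves to ABORT, and each branch is logged so that
    later review can reconstruct why the system acted."""
    if not human_authorised:
        log.info("ABORT: no affirmative human authorisation")
        return Decision.ABORT
    if classifier_confidence < min_confidence:
        log.info("ABORT: confidence %.2f below threshold %.2f",
                 classifier_confidence, min_confidence)
        return Decision.ABORT
    if link_quality < min_link:
        log.info("ABORT: degraded link (%.2f), returning control to operator",
                 link_quality)
        return Decision.ABORT
    log.info("ENGAGE: confidence %.2f, link %.2f, operator authorised",
             classifier_confidence, link_quality)
    return Decision.ENGAGE


if __name__ == "__main__":
    print(failsafe_gate(0.97, 0.4, human_authorised=True))   # Decision.ABORT
    print(failsafe_gate(0.97, 0.9, human_authorised=True))   # Decision.ENGAGE
```

The design choice that matters is not any particular threshold but the asymmetry: the lethal branch must be the one that has to be earned, and the record of how it was earned must survive the engagement.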
The spectacle of burning tanks and blasted warehouses seizes public attention. Yet the more consequential work happens in design rooms, classification datasets, and command-doctrine meetings, where the invisible decisions about autonomy are codified. If the Rostov blazes mark a new chapter in the robotisation of war, they should also mark a renewed determination by publics and professionals to insist that machines not become the proximate authors of death without a comprehensible chain of human responsibility. Ethics cannot be an afterthought retrofitted to victory. It must be embedded in the architectures that now decide, with increasing autonomy, whether and where to strike.
Prof. Adrian Locke
roboticwarfareblog.com