The last two years of the Ukraine war have accelerated a tactical experiment that previously lived in lab demonstrations and wargame post‑mortems. Small, inexpensive, human‑piloted aircraft were once the dominant tool for short‑range reconnaissance and attack. Today we are seeing the rapid emergence of what I call chase drones: purpose‑built interceptors that use on‑board artificial intelligence to locate, pursue, and defeat hostile UAVs with minimal human guidance. This is not science fiction. It is a pragmatic response to an air threat environment characterised by massed loitering munitions and continuous electronic warfare.
Technically speaking, a chase drone is an autonomous or semi‑autonomous interceptor optimised for visual or electro‑optical terminal guidance rather than long‑range strike. The architecture typically pairs a compact compute node, AI vision models for target recognition and tracking, and resilient navigation methods that keep working when GPS or datalinks are unreliable. The compute node performs target acquisition and short‑term trajectory prediction, while the airframe provides the speed and agility to close with targets that are usually slower but difficult to track with traditional radar. In practice this means either converting commodity frames into networked, vision‑driven interceptors or fielding bespoke kinetic interceptors with reusable airframes.
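To make the "short‑term trajectory prediction" step concrete, here is a minimal sketch in Python. It assumes the simplest possible target model, constant velocity, and steers the interceptor at the target's predicted position (lead pursuit). Real guidance stacks are far richer; the function names and the one‑second lead time are illustrative, not drawn from any fielded system.

```python
import math

def predict_position(pos, vel, dt):
    """Constant-velocity prediction: where the target will be dt seconds out."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def pursuit_heading(interceptor_pos, target_pos, target_vel, lead_time=1.0):
    """Steer toward the target's predicted position (simple lead pursuit).

    Returns the commanded heading in radians, measured from the x-axis.
    """
    aim = predict_position(target_pos, target_vel, lead_time)
    return math.atan2(aim[1] - interceptor_pos[1], aim[0] - interceptor_pos[0])

# Example: a target 100 m east, drifting north at 10 m/s, pulls the
# commanded heading slightly north of due east.
heading = pursuit_heading((0.0, 0.0), (100.0, 0.0), (0.0, 10.0))
```

Even this toy version shows why on‑board compute matters: the prediction and steering loop must run at camera frame rate with no link to an operator.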
At scale, the most consequential hardware development has been the proliferation of compact AI “strike kits” and node modules that retrofit existing drones with onboard autonomy. Industry and reporting indicate that tens of thousands of these modules have been slated for Ukraine under Western programs, a move intended to make large numbers of combat drones harder to defeat through jamming and to enable simple swarm tactics and autonomous pursuit inside the engagement envelope. That industrial logic matters: autonomy is expensive when bespoke, but affordable when delivered as a drop‑in module that multiplies existing airframes.
Concurrently, Ukraine’s improvised and commercial drone ecosystem produced low‑cost interceptors and tactics that reshaped battlefield expectations. Volunteer groups and small manufacturers fielded hundreds of cheap interceptors and FPV hunter teams that proved the concept in combat. Those systems are not glamorous, but they have a strategic effect: they force an adversary to expend more complex and costly weapons to achieve the same result. They also create an incentive to invest in reusable, AI‑guided kinetic interceptors that can be launched rapidly and operate under electronic attack.
Not all chase drones are small conversions. Several companies in Europe and the United States have pushed designs that are explicitly kinetic interceptors: optimised by design to engage other unmanned platforms with a high probability of kill while being mass producible and relatively low cost. The market reaction and the fielding of prototypes show a clear trend: defence planners want a tier of low‑cost, hard‑kill options to sit below missiles and above machine guns in the layered air defence problem set.
Operational experience, however, has highlighted hard limits. Electronic warfare remains the primary confounding variable. Visual autonomy is attractive because, in theory, it can keep operating in GPS‑ and datalink‑denied environments. In practice, camera and computer‑vision stacks face occlusion, countermeasures such as strobes or reflective coatings, and the plain facts of energy and kinematics: batteries limit loiter time and pursuit speed, and an interceptor optimised for tracking a slow loiterer struggles against fast, manoeuvring targets. Intelligence and open reporting also show that systems claiming immunity to jamming are in fact merely more resilient, or able to fall back on on‑board sensors; they are not invulnerable. These technical caveats matter because they delimit where chase drones are decisive and where they are merely useful.
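The energy‑and‑kinematics constraint can be reduced to back‑of‑the‑envelope arithmetic: a pursuit only succeeds if the range closes to zero before usable battery runs out. The sketch below encodes that check; the function name, the 20 % battery reserve, and all numbers are illustrative assumptions, not parameters of any real system.

```python
def can_intercept(closing_speed_mps, initial_range_m, endurance_s, reserve_frac=0.2):
    """Rough feasibility check for a stern-chase intercept.

    closing_speed_mps: interceptor speed minus target speed along the chase.
    A fraction of endurance (reserve_frac) is held back for return/landing.
    """
    if closing_speed_mps <= 0:
        return False  # no speed advantage: a tail chase never closes
    usable_s = endurance_s * (1.0 - reserve_frac)
    return initial_range_m / closing_speed_mps <= usable_s

# A 20 m/s speed advantage closes 2 km in 100 s, well inside a 10-minute
# endurance; a 5 m/s advantage against the same range does not make it.
feasible = can_intercept(20.0, 2000.0, 600.0)
```

The same arithmetic explains the caveat in the text: against a fast, manoeuvring target the closing speed shrinks or goes negative, and no amount of vision software fixes that.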
Beyond engineering, there is policy and ethics. Embedding autonomy into weapons that decide to close on and strike another aerial vehicle raises questions of attribution, command, and discrimination. Most fielded chase drones in Ukraine have retained human oversight in the critical engagement loop, and developers frequently emphasise that decision chains remain under operator control. Yet the push to scale and to shorten kill chains increases pressure to raise autonomy thresholds. Policymakers and commanders outside the lab and the factory must insist that any delegation of lethal functions be matched with documented authorisation, clear rules of engagement, and a robust, auditable after‑action record. The risk is not hypothetical: when automation reduces operator workload and latency, it also diffuses responsibility. The Ukrainian experience is an early real‑world test of whether militaries and industry can operationalise autonomy without abandoning accountability.
Strategically, chase drones change the cost calculus. If a relatively cheap interceptor can reliably defeat a more expensive loitering munition, then attrition economics favour the defender. That dynamic helps explain Western funding choices and manufacturing priorities for Ukraine: deliver autonomy at scale, proliferate interceptors, and blunt the strategic effect of massed strikes. But this is not a permanent advantage. Adversaries will adapt by increasing swarming density, raising cruise altitudes, or adding decoys and counter‑AI measures. The result will likely be a multi‑year cycle of measure and countermeasure in which autonomy amplifies both sides’ incentives to innovate.
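The attrition argument can be made explicit with a cost‑exchange ratio: the defender's expected spend per threat killed, divided by the threat's cost. Values below 1.0 favour the defender. The function and every number below are hypothetical illustrations, not procurement figures.

```python
def cost_exchange_ratio(interceptor_cost, p_kill, threat_cost):
    """Expected defender spend per kill, relative to the threat's cost.

    Assumes independent engagements, so the expected number of
    interceptors expended per kill is 1 / p_kill (geometric model).
    Ratios below 1.0 mean attrition economics favour the defender.
    """
    expected_interceptors = 1.0 / p_kill
    return (interceptor_cost * expected_interceptors) / threat_cost

# Illustration: a $5k interceptor with a 50% per-shot kill probability
# against a $50k loitering munition costs the defender $10k per kill,
# a ratio of 0.2 -- comfortably in the defender's favour.
ratio = cost_exchange_ratio(5_000.0, 0.5, 50_000.0)
```

The same formula also shows the adversary's countermoves at work: decoys and counter‑AI measures attack `p_kill`, while massed cheap threats attack `threat_cost`, pushing the ratio back toward or above 1.0.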
What then should we conclude? First, chase drones are a real and consequential addition to modern air defence, particularly in contested theatres where conventional assets are scarce. Second, their effectiveness is tightly coupled to software quality, sensor fusion, and tactical doctrine rather than to a single miraculous algorithm. Third, the social and legal frameworks that govern the use of autonomous interceptors must evolve at least as quickly as the technology. Otherwise we will have scaled a capability without the governance to ensure it is used in a way consistent with law and moral responsibility.
Ukraine’s ongoing experiment with AI chase drones therefore matters beyond the immediate theatre. It is teaching the world about rapid fielding, human‑machine boundary design, and the political economy of low cost autonomy. Observers should watch not only the kill rates and footage, but also the contracting patterns, the software provenance, and the public debate about control. Those are the fault lines where the next chapter of automated warfare will be written.