The holidays invite a different tempo. Amid artificial lights and rituals that insist on human presence, we are given rare permission to step back from tasks, orders, and objectives and to ask deeper moral questions about the tools we build. For those of us who think and write about robotics and autonomy in warfare, the question is both intimate and public: what does it mean to celebrate our humanity while pressing machines ever closer to life-and-death decisions?

The international conversation around lethal autonomous weapon systems has not paused for the season. In 2025, states continued formal negotiations under the Convention on Certain Conventional Weapons as the Group of Governmental Experts worked to articulate elements of an instrument to address these emerging technologies. These sessions reflect sincere attempts to translate abstract worry into concrete obligations.

At the same time, states and coalitions have pursued softer instruments intended to set norms without binding law. The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched in recent years and endorsed by an expanding list of states, exemplifies this approach. Its pragmatic language emphasizes rigorous testing, adherence to international law, and constraints on certain high-risk applications. Yet the declaration remains nonbinding, and the gap between aspirational language and enforceable limits shapes the ethical landscape we inhabit.

Journalists and observers have repeatedly warned that regulatory efforts lag behind technological diffusion. Autonomous and AI-enabled systems are increasingly present on modern battlefields and in theatres of conflict, producing pressure for rapid operational adoption that outpaces careful governance. Calls from international officials for clearer rules reflect a growing anxiety that technological momentum will set norms by default rather than by design.

If there is a moral lodestar, it is the insistence that human judgment remain central when force is applied against persons. Humanitarian organizations and legal scholars have argued strongly for limits on autonomy, including proposals to prohibit systems that are unpredictable in effect or that directly target people. Those arguments are rooted in international humanitarian law and in a broader ethical intuition: accountability requires that a human can understand, foresee, and be held responsible for lethal outcomes. The ICRC has framed these recommendations with clarity, urging legally binding rules to preserve human control and to mitigate risks to civilians and combatants.

Practical military institutions have responded by codifying principles and by attempting to operationalize them. The United States Department of Defense adopted five ethical principles for AI that emphasize responsibility, equity, traceability, reliability, and governability. These principles were meant to guide design and deployment across the lifecycle of systems and to reassure publics and allies that ethical constraints would inform force modernization. Implementing such principles inside complex acquisition and combat systems is, however, an engineering and institutional challenge of the first order.

The holiday season sharpens a contrast that is easy to forget in technical briefings. Rituals connect us to narrative lines of duty and care that do not translate neatly into software requirements. Machines can accelerate sensing and reduce risk to operators. They can also obscure agency and diffuse responsibility. When an algorithm recommends a strike, who takes moral credit for its success, and who bears the blame for its failure? The law of armed conflict offers partial answers. Ethical imagination must supply the rest.

My modest prescription for the next year starts with three commitments that are practicable yet principled. First, preserve meaningful human judgment at the point of lethal decision. Meaningful human judgment is not a slogan. It requires time, information, training, and a chain of responsibility that can be audited. Second, operationalize transparency and verification across the system lifecycle. Independent testing, red teaming, and public reporting where possible will reduce the risk that systems behave in ways their designers did not intend. Third, pursue layered governance: national implementation of principles, reciprocal inspection and verification practices among allies, and continued work toward international instruments that can bind states where norms alone fail.
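To keep the first commitment from remaining a slogan, it helps to see what an auditable chain of responsibility could look like in software. The sketch below is illustrative only, written in Python against hypothetical names (DecisionRecord, append_to_audit_chain) rather than any real military system: each human judgment is captured as a tamper-evident record, linked to the evidence the operator actually saw and to the record that preceded it.

```python
# Minimal, hypothetical sketch of an auditable decision record.
# It is not any real system's API; the names and fields are assumptions
# made purely to illustrate what "a chain of responsibility that can be
# audited" might mean in practice.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class DecisionRecord:
    """One human judgment at the point of decision, captured for later audit."""
    operator_id: str        # the accountable human, not the algorithm
    recommendation: str     # what the system proposed
    evidence_refs: tuple    # identifiers for the information the operator saw
    decision: str           # "approve", "reject", or "defer"
    rationale: str          # free-text justification; must not be empty
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Stable hash of the record so the chain is tamper-evident."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def append_to_audit_chain(chain: list, record: DecisionRecord) -> list:
    """Append a record linked to its predecessor, so the sequence of human
    judgments can be reconstructed and independently verified later."""
    if not record.rationale.strip():
        raise ValueError("A human rationale is required before the action is logged.")
    previous = chain[-1]["digest"] if chain else None
    chain.append({
        "previous": previous,
        "record": asdict(record),
        "digest": record.digest(),
    })
    return chain


if __name__ == "__main__":
    chain = []
    record = DecisionRecord(
        operator_id="op-417",                    # hypothetical identifiers
        recommendation="engage-candidate-031",
        evidence_refs=("sensor-feed-12", "intel-brief-0457"),
        decision="defer",
        rationale="Corroboration insufficient; review not complete.",
    )
    append_to_audit_chain(chain, record)
    print(json.dumps(chain, indent=2))
```

The design choice worth noticing is the hash chain: once a judgment is logged, it cannot be silently altered or removed, which is what makes an after-the-fact audit of responsibility credible rather than rhetorical.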

These are not sentimental resolutions. They are engineering requirements and policy choices. They will add cost and slow deployment. They will frustrate commanders and technologists who crave speed. That friction is the point. Moral deliberation is the affordable friction that prevents catastrophic error.

There is also, finally, a civic responsibility for technologists and scholars. The holiday invitation to reflect should be treated as a professional duty. Write clear documentation. Insist on robust testing. Speak up when a capability is scaled without the evidentiary base to justify it. The ethical life of a society is supported by small acts of integrity as much as by treaties and summits.

If this feels like an austere holiday message, it is because the subject is austere. The year ahead will not deliver simple solutions. But these weeks offer a discipline of attention. Let us bring that attention to the debates, committees, laboratories, and factories where the future of conflict will be shaped. Doing so honors the fragile moral commitments that the holidays remind us to keep.