The holidays bend time in a peculiar way. Familiar rituals compress months into a handful of evenings, and the mind, temporarily freed from day-to-day urgencies, returns to questions that refuse tidy answers. For those of us who study and build autonomous systems, the season invites a different kind of ledger. Not one of financial accounts or project milestones, but an ethical accounting: what have we authorized machines to do in our name, and what burdens have we surrendered while promising to reduce human risk?

The technical temptation is obvious. Autonomy promises endurance, speed and the ability to operate in environments where humans cannot safely persist. Yet the moral temptation is subtler. Give a machine the power to apply force and we also give it a slice of moral work that previously required human judgment. That transfer looks efficient on a slide deck but becomes morally precarious when we recall why human judgment matters: it draws on capacities for empathy, narrative context and responsibility. Many contemporary policy bodies have thus urged restraint and clearer rules for devices that might otherwise make life-and-death decisions without meaningful human control. The International Committee of the Red Cross, for example, has recommended new legally binding rules and argued against systems whose effects cannot be sufficiently understood, predicted or explained.

This is not merely an abstract quarrel. The United Nations has asked states and civil society to submit views on how to address the challenges raised by lethal autonomous weapon systems, a procedural step that reflects widespread unease about delegating force to machines. That request emerged from a General Assembly process aimed at assembling a broad set of perspectives on the legal, humanitarian and ethical dimensions of these technologies.

Grassroots and NGO movements have responded in turn. Campaigns urging prohibition or strict limits have gathered diplomatic support and public attention, reflecting a belief that certain delegations of violence to algorithms are intolerable. In late 2024 a broad coalition saw its influence registered in voting patterns at the UN and in regional pronouncements that advocated prohibition or rapid regulatory action.

Practitioners and policy actors are not idle either. Exercises that stage realistic scenarios help to surface operational constraints and legal tensions that abstract debates can obscure. Initiatives that bring together lawyers, ethicists, technologists and military planners are valuable precisely because they force tradeoffs into the open rather than allowing them to hide behind technical jargon. One such exercise convened stakeholders to probe limits and requirements across plausible use cases and to clarify where human judgment must be preserved. These exercises do not yield simple fixes, but they do reveal the contours of what responsible design and deployment might look like.

At a human scale the question is never only about rules. It is about the moral vocabulary we teach engineers, commanders and policymakers. The holiday impulse toward generosity and reflection offers a good test. If we believe that moral responsibility cannot be outsourced, then we must be candid about how autonomy shifts moral burdens. Whose conscience is engaged when a sensor misclassifies a person as a target? Whose name appears in the indictment when a prosecutor asks why an algorithm fired? The rhetorical elegance of autonomy risks obscuring this moral dilution. Responsibilization, not abdication, ought to be our guiding principle.

We should also remember that the ethics of delegation has deep precedents. Technologies from the cannon to the cruise missile reduced the immediacy of human action and thereby altered accountability. The novelty in machine learning systems lies in their opacity and unpredictability. Predictable machines create predictable chains of responsibility; opaque, adaptive systems create moral fog. That fog makes legal compliance alone insufficient as a guarantor of ethical action. Institutional mechanisms are required to restore clarity: robust testing regimes, requirements for explainability where possible, operational constraints that limit engagement contexts, and clear doctrines that preserve final human judgment over the use of lethal force. The literature and international advocacy currently converge on variants of these prescriptions, even while they disagree on the extent of permissible autonomy.
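What "preserving final human judgment" might mean in software can be sketched, if only to show how small the honest version of the claim is. The fragment below is a minimal illustration in Python, not a description of any real system or doctrine: the EngagementRequest fields, the APPROVED_CONTEXTS set and the confidence floor are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch only. The names, contexts and thresholds here are
# assumptions made up for this example, not drawn from any real standard.

APPROVED_CONTEXTS = {"defensive", "pre_cleared_zone"}  # assumed operational constraint
CONFIDENCE_FLOOR = 0.95                                # assumed minimum model confidence

@dataclass
class EngagementRequest:
    context: str             # operational context claimed by the system
    model_confidence: float  # classifier confidence in [0, 1]
    explanation: str         # rationale the system can surface to the operator

def gate(request: EngagementRequest, operator: str, operator_approves: bool) -> dict:
    """Decide nothing autonomously: refuse on hard constraints, otherwise
    defer to a named human, and record the outcome for later audit."""
    if request.context not in APPROVED_CONTEXTS:
        decision = "refused: outside approved engagement context"
    elif request.model_confidence < CONFIDENCE_FLOOR:
        decision = "refused: model confidence below floor"
    elif operator_approves:
        decision = "authorized by operator"
    else:
        decision = "declined by operator"
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,                # a person, named, every time
        "decision": decision,
        "context": request.context,
        "confidence": request.model_confidence,
        "explanation": request.explanation,  # what the system can explain, on the record
    }
```

Even this toy gate makes the point concrete: the machine can refuse, but it cannot authorize, and a named person appears in every record it leaves behind.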

Finally, the spirit of the holidays reminds us of vulnerability. Machines are tools intended to reduce human suffering in perilous situations. But tools can also amplify harm if their deployment is unreflective. If our ethical stance toward autonomy is only reactive, then we will perpetually be cleaning up after the next innovation. A more mature posture would combine humility about what current AI can guarantee, institutionalized responsibility for outcomes, and international commitments that limit the riskiest uses of autonomy while permitting rigorous, transparent research into safer modes of assistance.

This season, then, consider two modest commitments. First, normalize moral inquiry within technical teams. Ethics should be a routine checkpoint in development cycles, not a PR afterthought. Second, support multilateral mechanisms that translate ethical principles into enforceable standards. The debate now under way in diplomatic fora and in civil society is not ornamental. It is the social contract being renegotiated for how modern societies authorize force. The holidays are the right time to ask whether we will demand that this contract be explicit and binding or allow it to be rewritten in proprietary code.

Machines can help us keep safer watch. They should not replace the human stance that gives meaning to that watch. If we enter the new year with clear eyes about where responsibility must remain human, then the present generation will have honored both technological promise and moral obligation.