The old Christmas story is one of fragile ceasefires and sudden compassion. In 1914 men on opposite sides laid down rifles and sang across trenches. Today the actors at war may include not only human beings but networks of sensors, algorithms, and weapons that can be left to act at machine speed. That change forces us to ask a simple but urgent question. Is peace more or less likely on an earth where violence can be delegated to machines?

There is a seductive logic to the hope that robots will make warfare cleaner. Machines do not thirst for revenge. They do not tire, misread facial expressions, or panic under artillery fire. If we could program perfect discrimination, strict proportionality, and unfailing restraint into autonomous systems, machines might reduce some kinds of collateral harm. That is an attractive fantasy. It is also a dangerous one, because it confuses technical capability with moral agency.

From a technical and operational standpoint, the reality in 2025 is sobering. State and non-state actors are increasingly fielding massed, semi-autonomous aerial systems that complicate the traditional calculus of escalation and control. Adversaries now deploy drone raids in numbers designed to saturate defenses and to impose attrition at low cost. These developments have been documented in multiple conflict zones where drone swarms and prolific strike campaigns have become a tactical staple. The rapid normalization of these tactics materially shortens decision timelines and raises the probability of error, misattribution, and unintended escalation.

Those operational facts matter for law and policy. Humanitarian and rights organizations, along with neutral legal authorities, argue that machines that can select and attack people without human judgment pose a clear humanitarian risk. The International Committee of the Red Cross has urged states to preserve human control over the use of force and to negotiate binding rules constraining systems that cannot be made sufficiently predictable or accountable. The ICRC frames this not merely as legal prudence but as a moral imperative.

Civil society campaigns and human rights research echo the ICRC’s alarm. Groups campaigning against lethal autonomous systems press for prohibitions on systems that target people and for strict requirements that any permitted autonomous function remain under meaningful human supervision. Human Rights Watch and allied organizations have articulated how digital decision-making dehumanizes targets and risks discrimination and rights violations if deployment proceeds without strong legal guardrails.

At the multilateral level there has been movement, but not resolution. Within the Convention on Certain Conventional Weapons, the Group of Governmental Experts continues to work on a rolling text that might form the basis for a legally binding instrument. The effort in 2025 represents the clearest diplomatic attempt yet to pin down obligations on autonomous systems. Yet negotiations are painfully slow relative to the pace of technological diffusion. Policy windows close quickly when cheap, effective military options exist.

So what does all of this mean for the possibility of peace? First, robotic systems cannot create peace by themselves, because peace is a political achievement, not merely a technical condition. Machines can be designed to reduce some risks, but they also enable new forms of coercion. Cheap autonomous systems lower the cost of persistent pressure campaigns. They allow actors to punish with deniability and to test red lines more often and in finer gradations. The net effect may be a world with more constant, lower-level violence rather than fewer wars.

Second, delegating lethal authority to algorithms disperses responsibility. When a bomb is dropped, a human chain of command is readily visible. When an algorithm matches a signature and executes a strike, the moral and legal lines blur, and accountability devolves into a forensic afterthought. If we celebrate robotic peace without clarity on attribution and responsibility, we risk creating a moral vacuum in which victims cannot obtain redress and commanders cannot be deterred. The ICRC and multiple civil society actors insist that meaningful human control is not a rhetorical flourish but a necessary constraint for both lawfulness and accountability.

Third, the social fabric that permits peacemaking must remain human. Ceasefires, confidence building, and reconciliation rely on empathy, narratives, and institutions. Machines can help monitor compliance and reduce human exposure to danger. They can provide better battlefield transparency when designed for that purpose. Yet they cannot adjudicate justice, forgive, or create the political bargains that end wars. The Christmas truce occurred because men recognized shared humanity in a moment of exhaustion and sorrow. Algorithms have no access to that interior life.

If we are to approach the Christmas wish of peace in a truly robotic age, policy must not lurch between technophilia and resignation. We need three concurrent efforts. First, states must converge on clear legal limits that prohibit systems incapable of reliably respecting distinction and proportionality, and that forbid autonomous targeting of persons. Second, we must build international norms and technical standards for auditable human oversight, fail-safe mechanisms, and transparent testing regimes. Third, because proliferation matters, export controls and supply-chain governance must be treated as central to arms control in the age of autonomy.

These are not fanciful prescriptions. They are the practical translation of the ICRC’s plea and of multilateral work in Geneva. They are also what civil society groups are pushing for in public fora where the technology is discussed not as a wonder but as a weapon of real consequence.

To wish for peace on a robotic earth is not to reject technology. It is to insist that human beings design and constrain technology with the ends of justice and human flourishing in mind. Machines can reduce risk, they can carry burdens too terrible for humans, and they can extend our capacity for rescue and relief during crises. But if we mistake automation for moral maturity, we will find ourselves at a perpetual military Christmas Eve, where the songs of peace are drowned by the hum of engines and the calculus of attrition.

On this Christmas it is worth remembering two truths. First, moral responsibility does not scale with processing power. Second, political will does. If we truly want peace in an era where robots serve and fight, we must legislate limits, demand accountability, and cultivate the human capacities that alone can transform conflict into reconciliation. Until then we will have machines that fight for us but not the human will to stop fighting.