The recent series of convenings hosted or amplified by the Institute for Ethics in Artificial Intelligence at TUM has produced a modest but necessary shift in how philosophers, technologists, and policymakers speak about human-robot interaction. These events moved the debate beyond abstract exhortations about trustworthiness and toward concrete questions about rights, responsibility, and the architecture of human-robot teams.
Three themes repeatedly surfaced across panels and workshops. First, human dignity and human rights cannot be an afterthought when robots leave the lab and enter the contexts of care, policing, and the battlespace. Second, ethical norms for human-machine teaming must be specified at the level of system design and operational doctrine, not only as high-level principles. Third, standards and transnational governance deserve center stage because the behavior of machines in one jurisdiction produces effects that ripple across borders and norms. These empirically grounded emphases echoed across forums from Munich to Geneva and in parallel international venues.
The insistence on human rights is not rhetorical. The IEAI convened a summit and a subsequent UN side event that explicitly called for an inclusive international standard, even floating the idea of a convention addressing AI and human rights. That move reframes technical ethics from a set of voluntary best practices into a matter that may require binding, accountable instruments. If a convention is to be meaningful, it must link technical specifications, auditing mechanisms, and clear chains of responsibility for outcomes that affect fundamental rights.
From a philosophical perspective we must resist two temptations. The first is moral exceptionalism toward machines, the idea that because an artifact is not conscious it can never figure in moral calculus. The second is anthropomorphic complacency, where the mere presence of a human operator is treated as sufficient proof of moral agency. What matters ethically is the locus of decision-making and the practical chain of control. In many deployed human-robot teams the human role has become supervisory and intermittent. When harms occur, supervisory presence alone should not absolve designers, operators, and commanders of responsibility. This is a conceptual clarification with immediate policy implications.
Practically speaking, the IEAI and collaborating workshops emphasized three design imperatives for ethical human-robot teaming. First, predictability: systems must behave in ways that teammates can anticipate under foreseeable conditions. Second, explainability: systems should provide representations intelligible to human teammates so that attribution of intent and error is possible. Third, negotiability: teams must have protocols enabling humans to intervene, correct, or veto autonomous actions in a timely manner. These are not academic niceties; they are prerequisites for operational chains of accountability.
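To make the third imperative, negotiability, concrete, the following is a minimal sketch in Python of a review-and-veto loop for a human-robot team. All names (ProposedAction, cautious_supervisor, and so on) are hypothetical illustrations rather than any real system or standard, and a fielded implementation would need timeouts, authentication, and far richer state. The point is only the shape of the protocol the accountability argument requires: propose, hold for human review, and log who decided what.

```python
# Sketch of a "negotiability" protocol: an autonomous proposal is held for
# human review; the supervisor may approve, correct, or veto it, and every
# outcome is logged so responsibility stays traceable.
# All names here are illustrative assumptions, not drawn from a real system.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List, Optional


@dataclass
class ProposedAction:
    description: str
    risk_level: str  # e.g. "low", "elevated", "critical"


@dataclass
class Decision:
    action: ProposedAction
    outcome: str      # "executed", "corrected", or "vetoed"
    decided_by: str   # supervisor identifier for attribution
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def review_and_execute(
    action: ProposedAction,
    supervisor_review: Callable[[ProposedAction], Optional[ProposedAction]],
    audit_log: List[Decision],
) -> Decision:
    """Hold the proposal for human review; act only on what survives review.

    supervisor_review returns None to veto, the same object to approve,
    or a modified action to correct the proposal.
    """
    reviewed = supervisor_review(action)
    if reviewed is None:
        decision = Decision(action, "vetoed", "supervisor")
    elif reviewed is action:
        decision = Decision(action, "executed", "supervisor")
    else:
        decision = Decision(reviewed, "corrected", "supervisor")
    audit_log.append(decision)  # accountability: every outcome is attributable
    return decision


# Example supervisor policy: veto anything flagged as critical.
def cautious_supervisor(action: ProposedAction) -> Optional[ProposedAction]:
    return None if action.risk_level == "critical" else action


log: List[Decision] = []
review_and_execute(ProposedAction("reposition sensor mast", "low"), cautious_supervisor, log)
review_and_execute(ProposedAction("engage target autonomously", "critical"), cautious_supervisor, log)
for d in log:
    print(d.timestamp, d.outcome, "-", d.action.description)
```

The design choice worth noting is that the audit log, not the supervisor's mere presence, is what makes responsibility traceable: the record ties each executed, corrected, or vetoed action to a reviewable decision, which is the operational counterpart of the accountability claim above.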
Standards bodies and international organizations are already part of this conversation. Discussions at technical and policy workshops referenced ongoing standardization efforts and frameworks that aim to codify human-machine teaming practices. The point here is crucial: ethics that remain purely voluntary and diffuse will be outpaced by the rapid fielding of autonomous-capable platforms. Without binding standards, the result is a landscape where legal liability, commercial incentives, and jurisdictional variance create a patchwork of responsibility that protects neither human rights nor safety.
There is a normative question about the kinds of human-robot relationships we want to foster. In healthcare and assistive settings, the emphasis at IEAI panels was rightly placed on dignity, consent, and the preservation of agency for vulnerable users. In the military domain, different values present themselves: mission accomplishment, force protection, and compliance with the laws of armed conflict. Yet these domains are not hermetically sealed. Technologies designed originally for logistics or surveillance rapidly migrate into more consequential roles. Ethical governance must therefore be anticipatory and cross-domain.
Finally, we return to politics. Calls for an international convention on AI and human rights will not succeed without broad coalitions that bring low- and middle-income states into the drafting room. The IEAI and partner organizations have taken important steps by bringing these discussions to multilateral fora. But an ethical regime that is merely Western or techno-elite will fail those most exposed to harm. The task before us is to translate philosophical commitments into institutional designs that distribute expertise, oversight, and enforcement. This is a pragmatic demand and an ethical one.
In closing, human-robot teaming presents a suite of ethical problems that can be resolved only through a combination of thoughtful philosophy, precise engineering, and robust governance. The IEAI conversations illustrate that progress is possible when those spheres talk to one another. The remaining work is heavy. It involves drafting standards that are technically sound, embedding rights-respecting audits into procurement, and building legal channels that make responsibility traceable. Without these, we risk ceding moral space to opaque systems whose errors we will inherit but whose designers and operators may escape accountability. The challenge is not merely to make machines that behave better. It is to build institutions that ensure those machines serve human flourishing and respect human rights.