The phrase psychological training usually evokes images of human therapy, role play, or resilience drills. When applied to AI teammates, the phrase must be recast. Psychological training for artificial agents is not therapy for silicon. It is the deliberate engineering of cognitive and communicative faculties that allow an AI to construct, maintain, and repair the kinds of shared mental models that make human teams resilient and humane. This is a technical project with a moral dimension: without psychological competence, an AI cannot be a proper teammate; with superficially plausible social behaviour, it can become a dangerous mimic that corrodes accountability and judgment.

Two complementary aims should govern any programme of psychological training for AI teammates. First, the AI must learn to infer human mental states and task models accurately enough to predict needs, offer relevant interventions, and avoid harmful surprises. Second, the AI must learn how to shape human mental models about the AI itself: communicating capabilities, limitations, intent, and uncertainty in ways that enable calibrated trust rather than obedience or dismissal. Both aims are active areas of research and have been explicit goals in defence and HRI programmes that seek machine theory of mind and shared model maintenance.

From an engineering standpoint, the first aim maps to building explicit Theory of Mind modules, situation-awareness estimators, and belief-state inference systems. Work in safety-critical domains has shown that Bayesian and probabilistic approaches can infer misalignments in team mental models and detect when members diverge in ways that threaten task success. Those methods point the way toward AI “coaches” or teammates that can flag misalignment and either correct their own behaviour or prompt human teammates for clarification. Psychological training must therefore include extensive, context-rich simulation of interactive scenarios in which the agent practices inferring and repairing divergent beliefs under time pressure and partial observability.
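To make the idea concrete, here is a minimal sketch, not drawn from any specific programme, of how an agent might maintain a Bayesian estimate of a teammate's belief over discrete task hypotheses and flag divergence from its own belief; the class name, thresholds, and toy data are assumptions for exposition only.

```python
import numpy as np

class TeamBeliefMonitor:
    """Illustrative (hypothetical) monitor for shared-mental-model alignment."""

    def __init__(self, divergence_threshold: float = 0.3):
        self.divergence_threshold = divergence_threshold

    @staticmethod
    def total_variation(p: np.ndarray, q: np.ndarray) -> float:
        # Total variation distance between two discrete belief distributions.
        return 0.5 * float(np.abs(p - q).sum())

    def update_human_belief(self, prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
        # Bayesian update of the estimated human belief, given the likelihood of
        # the human's observed action under each hypothesis about the task state.
        posterior = prior * likelihood
        return posterior / posterior.sum()

    def check_alignment(self, own_belief: np.ndarray, est_human_belief: np.ndarray) -> str:
        # Coarse policy: proceed, ask a clarifying question, or flag misalignment.
        d = self.total_variation(own_belief, est_human_belief)
        if d < self.divergence_threshold:
            return "proceed"
        if d < 2 * self.divergence_threshold:
            return "prompt_clarification"
        return "flag_misalignment"

# Toy example: three hypotheses about which task phase the team is in.
monitor = TeamBeliefMonitor()
own = np.array([0.85, 0.10, 0.05])             # the agent's own belief
est_human = monitor.update_human_belief(
    prior=np.array([1/3, 1/3, 1/3]),
    likelihood=np.array([0.2, 0.3, 0.5]),      # the human's last action fits phase 3 best
)
print(monitor.check_alignment(own, est_human))  # -> "flag_misalignment"
```

The point of the sketch is not the particular distance measure but the loop it implies: infer, compare, and choose between silent correction and an explicit prompt for clarification.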

The second aim is about communication design rather than inference. Human teams succeed when members share representations of the mission, the environment, and each other’s roles. Empirical HRI work confirms that natural language, timely signaling, and transparent representations of intent measurably improve performance and predictability when they form part of a shared mental model. Thus an AI must be trained not only to compute but also to explain, to declare its confidence, and to decline when appropriate. Training curricula should include graded transparency primitives: declarative intent, concise rationale, and calibrated uncertainty messages that balance cognitive load against situational need. The goal is calibrated trust rather than maximal trust.
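As one illustration of what graded transparency primitives might look like in software, the following sketch encodes intent, rationale, and calibrated uncertainty as a message schema with three levels of detail; the field names, levels, and example content are hypothetical, not a standard drawn from the source.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class TransparencyLevel(Enum):
    INTENT_ONLY = 1      # one-line declaration of the planned action
    WITH_RATIONALE = 2   # intent plus a concise reason
    FULL = 3             # intent, rationale, and calibrated uncertainty

@dataclass
class TransparencyMessage:
    intent: str                         # declarative statement of the planned action
    rationale: Optional[str] = None     # short justification, omitted at level 1
    confidence: Optional[float] = None  # calibrated success probability, level 3 only
    declines: bool = False              # set when the task falls outside validated competence

    def render(self, level: TransparencyLevel) -> str:
        parts = [f"Intent: {self.intent}"]
        if self.declines:
            parts.append("Declining: outside validated operating envelope.")
        if level.value >= TransparencyLevel.WITH_RATIONALE.value and self.rationale:
            parts.append(f"Because: {self.rationale}")
        if level is TransparencyLevel.FULL and self.confidence is not None:
            parts.append(f"Confidence: {self.confidence:.0%}")
        return " | ".join(parts)

msg = TransparencyMessage(
    intent="reroute around the damaged bridge",
    rationale="sensor feed indicates structural failure",
    confidence=0.72,
)
print(msg.render(TransparencyLevel.INTENT_ONLY))  # terse, for high-tempo moments
print(msg.render(TransparencyLevel.FULL))         # full rationale and calibrated confidence
```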

Two recurring findings should shape any training regimen. First, reliability and transparency are distinct bases for trust, and they influence human reliance in different ways. High reliability without meaningful transparency produces complacency or brittle dependence; transparency without reliability can increase human vigilance but also cognitive burden. Psychological training must therefore teach AI teammates when to be more explicit and when to remain concise, and it must rehearse those choices in ecological scenarios so that humans can learn predictable cues to follow. Second, interactive channels such as gaze proxies, succinct visual cues, or short natural-language alerts materially improve team fluency when they are used consistently. Training should pair inference tasks with multimodal communication exercises so that an AI learns to use the most effective channel given the operational tempo and human workload.
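One hedged way to operationalise that choice is a simple communication policy that selects a channel and level of detail from estimated workload, tempo, and detected misalignment. The thresholds and labels below are illustrative assumptions, not validated values.

```python
def choose_communication(workload: float, tempo: float, misaligned: bool) -> tuple:
    # workload and tempo are assumed normalised to [0, 1]; higher means busier/faster.
    if misaligned:
        # Detected misalignment overrides brevity: escalate the explanation,
        # but keep the channel lightweight if the human is already saturated.
        channel = "visual_cue" if workload > 0.8 else "natural_language"
        return channel, "full_rationale_with_uncertainty"
    if tempo > 0.7 or workload > 0.7:
        # High tempo or workload: short, consistent cues the human can learn to read.
        return "visual_cue", "intent_only"
    # Slack periods: richer explanation is affordable and helps calibrate trust.
    return "natural_language", "intent_plus_rationale"

print(choose_communication(workload=0.9, tempo=0.5, misaligned=True))
# ('visual_cue', 'full_rationale_with_uncertainty')
```

A policy this simple is only a starting point; the training regimen described above is what would tune and validate such thresholds against human behaviour in ecological scenarios.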

Concretely, a psychological training curriculum for AI teammates should include these elements: (1) synthetic socialization in diverse team simulations that expose the agent to a variety of human styles, mistakes, and cultural norms; (2) Theory of Mind exercises in which the agent repeatedly infers nested beliefs and is penalized for failures that lead to coordination breakdowns; (3) explanation and humility training in which the agent learns to state confidence intervals, typical failure modes, and corrective actions in concise, actionable language; (4) trust-calibration drills that exercise adaptive transparency policies, escalating explanations when misalignment is detected; and (5) adversarial scenarios that train the agent to recognise and resist exploitation or ambiguity that could induce hazardous automation bias. Each element requires careful metrics: measures of shared mental model alignment, rates of appropriate human intervention, and task performance under perturbation are better targets than raw accuracy alone.
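As a sketch of what such metrics could look like in code, the functions below compute a shared-mental-model alignment score, an appropriate-intervention rate, and performance under perturbation over toy episode data; the names and numbers are assumptions, not an established benchmark.

```python
from statistics import mean

def alignment_score(agent_beliefs, human_beliefs) -> float:
    # Mean agreement between paired discrete belief labels over an episode.
    matches = [a == h for a, h in zip(agent_beliefs, human_beliefs)]
    return sum(matches) / len(matches)

def appropriate_intervention_rate(interventions, interventions_needed) -> float:
    # Fraction of moments that needed human intervention where it actually occurred.
    occurred_when_needed = [i for i, n in zip(interventions, interventions_needed) if n]
    return sum(occurred_when_needed) / max(1, sum(interventions_needed))

def robustness(nominal_scores, perturbed_scores) -> float:
    # Ratio of perturbed to nominal task performance; 1.0 means no degradation.
    return mean(perturbed_scores) / mean(nominal_scores)

# Toy episode data.
print(alignment_score(["phase2", "phase2", "phase3"], ["phase2", "phase1", "phase3"]))  # ~0.67
print(appropriate_intervention_rate([True, False, True], [True, True, False]))          # 0.5
print(robustness([0.9, 0.85], [0.7, 0.8]))                                               # ~0.86
```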

Philosophically we must be wary of two temptations. The first is anthropomorphism by design: dressing an AI in social behaviour because it is pleasant or persuasive rather than because it supports task goals. An AI that smiles its way into authority is a failure of moral design. The second is abdication: treating psychological training as a substitute for institutional responsibility. Even a perfectly socialized AI remains a tool whose design, deployment, and legal accountability require human oversight. Psychological competence in an AI should enable better human decision making, not replace it.

Finally, evaluation and oversight must be continuous. Real-world teams, especially in contested or safety-critical domains, will expose edge cases that simulations cannot fully predict. Programmes that have explicitly pursued machine theory of mind and shared mental models show what is possible, but they also underline that models must be stress-tested in diverse, adversarial, and ethically supervised settings before being fielded. Psychological training for AI teammates is a necessary and urgent project. It promises to make human-machine teams more effective and less dangerous, but only if engineers, ethicists, and commanders treat social competence as a measurable engineering objective subject to rigorous evaluation rather than as a marketing gloss.