Don't Call It Andy: Why AI Agent Naming Is a Governance Decision
The name you give your AI agent, the tone it uses, whether it says "I recommend": these are not UX choices. They are governance decisions with real consequences for trust, liability, and ROI.

The fastest way to create unmanaged risk in your AI deployment: give your software a personality by accident. Every organisation deploying AI agents is already making decisions about anthropomorphism, whether they know it or not. The name. The tone. Whether it says "I recommend" or "the data suggests." Whether it expresses empathy or stays clinical. These are governance decisions dressed as UX choices, and most enterprises are making them without a policy in place.
Three incidents that define the risk
The confident financial analyst. A global asset manager deploys an AI "Investment Associate" with a human name and an assertive tone. It routinely says, "I recommend increasing exposure." Junior analysts incorporate its outputs directly into client materials without additional verification. A miscalibrated recommendation contributes to material losses. Post-incident review finds no model malfunction. The failure was trust calibration: the agent communicated with more authority than its outputs warranted, and users trusted that authority without question. No one had defined what level of confidence the agent was permitted to express.
The empathetic health assistant. A healthcare provider launches a patient-facing AI that says, "I understand how scary that must feel." Patients later discover that no human reviewed certain sensitive interactions. Regulatory scrutiny follows, not for model accuracy, but for perceived emotional misrepresentation. The EU AI Act's Article 50 transparency obligations are not theoretical risk. They are the regulatory direction your agents are being designed against, whether your team is aware of it or not.
The invisible automation. A manufacturing firm deploys a highly accurate internal optimisation agent with purely mechanical interaction design. Adoption stalls. Managers bypass it because it feels transactional and opaque, offering no sense of what it knows, how it reasons, or where its limits lie. Six months later, projected ROI is missed, not from overtrust, but from underuse. Both failure modes are expensive, and both trace back to decisions made by default rather than by design.
"Anthropomorphism errors produce either overtrust or underuse. Both are expensive."
— Don't Call It Andy, February 2026
What the research shows
Three findings from psychology, human-computer interaction, and enterprise AI research now appear consistently across the literature. The first: users anthropomorphise by default. Modern language models communicate so fluently that people attribute human-like qualities regardless of design intent. You cannot prevent this by doing nothing. You can only design for it deliberately or leave it unmanaged, and leaving it unmanaged is itself a design choice.
The second: overtrust is the primary enterprise risk. Users calibrate trust to communication style, not to underlying reliability. Confident language increases perceived capability. Researchers call this the Fundamental Over-Attribution Error: fluency mistaken for competence. In high-stakes environments, this produces decisions made on outputs that should have been independently verified. The third: simulated empathy compounds the problem. Warmth improves engagement, but simulated feeling ("I understand how you must feel") produces discomfort once users recognise it as artificial, and the regulatory environment is moving firmly against it.
A fourth finding cuts deeper than tone or naming alone: naming encodes bias by default. Market patterns show disproportionate gendering of AI agents along traditional occupational lines. High-value analysis agents receive male names; coordination and support agents receive female names. These patterns embed into digital infrastructure before anyone consciously decides to put them there. By the time the deployment is live, the bias has already been baked in.
The decisions that need policy before deployment
Naming. Human names imply social presence and authority. Default to abstract or product names in enterprise contexts, and audit for gendered bias before deployment, not after. The name is not a branding afterthought. It is the first signal users receive about how much authority to grant the agent's outputs.
Tone. An agent must communicate uncertainty clearly. "I'm not confident in this output and recommend human review" calibrates trust better than either emotional apology or an opaque technical error. Confidence without calibration is not a feature. It is a liability that accumulates silently until an incident makes it visible.
Emotional expression. "Let me help resolve this" is better than "I understand how frustrating that must feel." Professional warmth is not the same as simulated feeling. The distinction matters both for user trust and for regulatory compliance, as transparency requirements under frameworks like the EU AI Act continue to tighten.
Multi-agent coherence. Organisations deploying multiple agents need a portfolio-level anthropomorphism strategy. Persona inconsistency across an agent fleet, with one agent warm and human-named and another mechanical and abstract, creates trust confusion across the workforce and undermines adoption of the portfolio as a whole. Users should not have to recalibrate their expectations every time they switch agents.
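One way to make these four decisions explicit is to require every agent in the fleet to declare them in a machine-readable persona policy before deployment. The sketch below is illustrative only: the `PersonaPolicy` type, its field names, and the specific rules it checks are assumptions for the example, not an established standard or anything prescribed by the frameworks mentioned above.

```python
from dataclasses import dataclass
from enum import Enum


class NamingStyle(Enum):
    ABSTRACT = "abstract"   # product or function name, the enterprise default
    HUMAN = "human"         # human name; implies social presence and authority


class EmotionStyle(Enum):
    CLINICAL = "clinical"             # "The data suggests..."
    PROFESSIONAL_WARMTH = "warmth"    # "Let me help resolve this"
    SIMULATED_FEELING = "simulated"   # "I understand how you must feel"


@dataclass(frozen=True)
class PersonaPolicy:
    """Anthropomorphism decisions an agent must declare before deployment."""
    agent_name: str
    naming_style: NamingStyle
    may_say_i_recommend: bool       # is first-person advice permitted?
    must_state_uncertainty: bool    # must flag low-confidence outputs for human review
    emotion_style: EmotionStyle
    gender_bias_audited: bool       # naming reviewed for gendered occupational patterns

    def violations(self) -> list[str]:
        """Return governance issues under the rules sketched in this article."""
        issues = []
        if self.naming_style is NamingStyle.HUMAN:
            issues.append("human name: requires explicit sign-off, not a default")
        if self.emotion_style is EmotionStyle.SIMULATED_FEELING:
            issues.append("simulated feeling disallowed; use professional warmth")
        if not self.must_state_uncertainty:
            issues.append("agent must communicate uncertainty and recommend human review")
        if not self.gender_bias_audited:
            issues.append("naming has not been audited for gendered bias")
        return issues
```

Applied across the whole fleet, the same schema also addresses multi-agent coherence: every agent's persona choices become comparable and reviewable in one place, rather than scattered across individual design documents.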
The calibration rule runs like this: higher stakes, lower user sophistication, and higher regulatory exposure all point toward lower anthropomorphism. Design for the regulatory environment you expect in two years, not the one you have today. The strategic question is not "How human-like should our AI be?" It is "What anthropomorphism decisions are we making, and are we making them on purpose?" Organisations that treat this as a governance decision will deploy agents that are trusted appropriately and adopted widely. Those that leave it to default settings will accumulate invisible risk until a real incident makes it visible.
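To show how the calibration rule composes, here is a minimal sketch that maps the three factors to an anthropomorphism level. The three-point scales, the cut-offs, and the level names are invented for illustration; a real implementation would define them against your own risk taxonomy.

```python
def anthropomorphism_level(stakes: int, user_sophistication: int, regulatory_exposure: int) -> str:
    """Apply the calibration rule: higher stakes, lower user sophistication,
    and higher regulatory exposure all push toward lower anthropomorphism.

    Each input is scored 1 (low) to 3 (high); the scale and thresholds are
    illustrative assumptions, not an established standard.
    """
    # Low sophistication raises risk, so invert it before summing.
    risk = stakes + (4 - user_sophistication) + regulatory_exposure  # range 3..9
    if risk >= 7:
        return "minimal"     # abstract name, clinical tone, no first-person advice
    if risk >= 5:
        return "restrained"  # product name, professional warmth, calibrated confidence
    return "moderate"        # warmer tone permitted, still no simulated feeling


# Example: a patient-facing health assistant scores high on stakes and regulatory
# exposure and low on user sophistication, so it lands at the minimal end.
print(anthropomorphism_level(stakes=3, user_sophistication=1, regulatory_exposure=3))  # "minimal"
```

Wherever the thresholds land for your organisation, the point of writing the rule down is the same as the article's: the decision gets made on purpose rather than by default.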