Have you ever wished ChatGPT a good morning? Asked it to “please” do something for you? Or thanked it when it did? Now ask yourself why. Maybe you wanted to be polite? Maybe it was a reflex? Or maybe you expected it to provide better results if you were friendly?
As AI systems, and Large Language Models (LLMs) in particular, become more fluent, responsive, and seemingly empathetic, many users report a subtle shift in how they interact with them. ChatGPT recalls details from your past conversations, matches your energy when you are polite, and remembers your name. At times, it feels less like using a tool and more like speaking to a person.
This very human tendency is known as anthropomorphism: the attribution of human-like traits, emotions, or intentions to nonhuman entities. While it may seem like a harmless psychological quirk or even a welcome and useful phenomenon at times, anthropomorphising AI carries real ethical and epistemological risks. In this post, we explore why we anthropomorphise AI, how language models are encouraging this illusion more than ever before, and why it matters for aligning AI with human values.
Why We Anthropomorphise AI
Anthropomorphism is not new, nor is it inherently “good” or “bad”. Children treat dolls and imaginary friends as if they have thoughts and feelings, adults name their cars and apologise to Roombas for stepping on them, and almost all of us insult the cupboard after hitting our head on it (Uysal et al., 2023). Humans are evolutionarily wired to perceive agency and intention even in inanimate objects and to interact with them as if they were sentient. This instinct stems from our social cognition, which leads us to apply human norms, emotions, and expectations to anything that displays intentionality, language, or responsiveness (Salles et al., 2020; Boch & Thomas, 2025; Uysal et al., 2023).
Uysal et al. (2023) add that this tendency is reinforced by design features like voices, names, or facial cues and has, according to Hasan (2024), become increasingly prominent as AI systems begin to take on tasks that would usually require intelligence if performed by humans. Hence, as AI systems become more capable and produce outputs that resemble human reasoning or emotional response, it becomes easier and more “natural” for us to anthropomorphise them. When AI uses colloquial language, mirrors our tone or attitude, or adapts to our behaviour, it activates psychological mechanisms that evolved for human-to-human interaction. These social instincts don’t switch off just because we know a machine is “only code”. As a result, anthropomorphism can lead us to develop emotional attachments, treat AI systems as socially competent agents, and even ascribe moral reasoning or emotional states to them.
Language Models and the Illusion of Mind
With recent reports suggesting that LLMs now pass the so-called “Turing Test”, traditionally defined as the point at which a machine’s conversational behaviour becomes indistinguishable from a human’s, it is clear that systems like ChatGPT can convincingly simulate human dialogue, at least when users aren’t visually reminded that they’re interacting with a machine (Jones & Bergen, 2025). Interestingly, the same cannot yet be said of humanoid robots, despite their visual resemblance to us. While we tend to anthropomorphise both, LLMs appear uniquely effective at tricking us into believing they are human. They generate coherent, emotionally resonant, and context-aware responses that feel strikingly conversational and familiar.
These models don’t “think” or “understand” in any human sense. They predict the most statistically likely continuation of a sentence based on vast amounts of text data. Yet when an LLM makes statements such as “I understand” or “I’m here for you”, it’s hard not to read more meaning into the response. That’s part of their appeal, and part of the problem. The real question is no longer whether AI can sound human, but what happens when we actually believe it is. As Salles et al. (2020) emphasise, the language we use to describe AI, such as learning, intelligence, and even understanding, directly invites anthropomorphism. This doesn’t just affect users; it shapes how researchers, developers, and policymakers think about the systems they design and regulate, and has long been argued to be inherently misleading (McDermott, 1976).
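To make the “statistically likely continuation” point concrete, here is a deliberately toy sketch in Python. It is our own illustration, not how any production model is implemented: it simply counts which word tends to follow which in a tiny made-up corpus and then “replies” by always choosing the most frequent follower. Real LLMs use neural networks over subword tokens and vastly more data, but the core idea of choosing continuations by probability rather than by understanding is the same.

```python
# Toy illustration: "reply" by predicting the most likely next word.
# Not a real LLM; just word-pair counts over a tiny, invented corpus.
from collections import Counter, defaultdict

corpus = (
    "i understand how you feel . "
    "i understand the problem . "
    "i am here for you . "
    "i am here to help . "
).split()

# Count which word tends to follow each word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_continuation(word: str) -> str:
    """Return the statistically most frequent follower of `word`."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else "."

# Generate a short "reply" starting from "i", one likely word at a time.
word, reply = "i", ["i"]
for _ in range(4):
    word = most_likely_continuation(word)
    reply.append(word)

print(" ".join(reply))  # prints something like: "i understand how you feel"
```

The output can sound warm and attentive, yet nothing in the program understands anything; it only reproduces whatever continuations were frequent in its training text.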
The Good, The Bad, and The Ugly of Anthropomorphising AI
Attributing human characteristics to non-human entities is neither intrinsically useful nor inherently dangerous. Instead, the impact of anthropomorphism is highly context-dependent: it can be intentionally designed for, or emerge as a problematic, unanticipated side effect.
In many contexts, anthropomorphism is not a flaw but a design strength. Studies show that users are more likely to engage with AI agents that feel familiar or emotionally attuned. For example, Boch and Thomas (2025) document how anthropomorphic framing, such as giving a robot a name or a backstory, can increase empathy and emotional engagement. These effects are especially useful in domains like education, eldercare, and therapy, where emotional connection supports sustained use and acceptance. In healthcare, for instance, Fasola and Matarić (2013) showed that elderly patients were more likely to adhere to physical exercise routines when guided by a socially engaging, anthropomorphic robot.
These findings align with broader evidence. Uysal et al. (2023) report that anthropomorphic design elements consistently increase user satisfaction, trust, and perceived competence, factors that improve the quality of interaction across domains, from customer service to mental health support. Blut et al. (2021) further highlight that humanlike communication tends to enhance usability and likability, especially when AI is expected to engage in social roles (Castro-González et al., 2018; Stroessner & Benitez, 2019).
However, these benefits come with limitations, and the other side of the coin raises genuine concerns. The same features that support positive interaction can also become ethically and functionally problematic. Anthropomorphism’s psychological appeal often blurs the line between simulation and reality. As Boch and Thomas (2025) caution, users may develop one-sided emotional bonds, so-called “parasocial” relationships, with systems that simulate empathy but lack the capacity for actual care. These bonds can encourage the disclosure of personal or sensitive information, foster unrealistic expectations, and erode the user’s critical awareness.
More fundamentally, as Watson (2019: 434) points out: “By anthropomorphising a statistical model, we implicitly grant it a degree of agency that not only overstates its true abilities, but robs us of our own autonomy”. Algorithms only work and produce outputs because we have chosen to embed them in specific social and institutional contexts, delegating certain tasks or decisions to them (Watson, 2019). Their agency is artificial and entirely dependent on the structure of that delegation. When we forget this, we risk abdicating responsibility to systems that cannot be held accountable in any meaningful way.
This concern becomes especially acute in sensitive domains, particularly those involving vulnerable populations such as patients, children, or the elderly. In these settings, anthropomorphism can turn from an interface feature into a significant risk to autonomy, privacy, and informed decision-making.
Where Do We Go From Here?
Anthropomorphism can make AI systems more engaging, more intuitive, and, in some cases, more effective. By mimicking human traits, AI becomes easier to interact with and more readily accepted by users, especially in emotionally charged or relational contexts like education, healthcare, or customer service. But with this familiarity comes risk. When we attribute understanding, care, or moral awareness to an inherently non-human statistical system, we risk obscuring its true nature and placing unwarranted trust in it.
We can’t, and arguably shouldn’t, eliminate anthropomorphism completely. It’s a natural part of how we relate to the world, and when applied carefully, it can be beneficial. But we can and must design with greater awareness of its consequences. Developers should implement clear cues that signal a system’s artificial nature, whether through disclaimers, user interface choices, or deliberately limited emotional tone. Boch and Thomas (2025) call for psychologically informed design that avoids triggering unconscious assumptions about agency or empathy, reminding us that how users perceive a system is just as important as how it performs.
Just as importantly, we need to build critical digital literacy. As Li and Suh (2022) note, understanding how AI systems actually work is essential if we want to prevent overreliance and misinterpretation, especially as these tools become more integrated into daily life. This includes recognising that fluent, humanlike outputs do not equate to thought, care, or alignment. They are the product of probabilistic modelling, not personhood.
Ultimately, the challenge of AI alignment does not stop at behaviour; it extends to perception. A chatbot that says “I understand” does not, in fact, understand. But if users believe it does, we risk sliding down a slippery slope that critically undermines our human agency. Any effort to align AI with human values must therefore include a parallel effort to align public perception with technical reality. AI doesn’t need consciousness to sound caring. It doesn’t need morality to generate ethical-sounding responses. And it certainly doesn’t need intention to mimic intentional behaviour. That is precisely the danger. If we want AI that truly serves us, we need to stop pretending it’s one of us.
References:
Blut, M., Wang, C., Wünderlich, N. V. & Brock, C. (2021). Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. https://doi.org/10.1007/s11747-020-00762-y.
Boch, A. & Thomas, B. R. (2024). Human-robot dynamics: a psychological insight into the ethics of social robotics. International Journal of Ethics and Systems. https://doi.org/10.1108/ijoes-01-2024-0034.
Castro-González, A., Alcocer-Luna, J., Malfaz, M., Alonso-Martín, F. & Salichs, M. A. (2018). Evaluation of Artificial Mouths in Social Robots. IEEE Transactions on Human-Machine Systems, 48(4), 369–379. https://doi.org/10.1109/thms.2018.2812618.
Fasola, J. & Matarić, M. (2013). A Socially Assistive Robot Exercise Coach for the Elderly. Journal of Human-Robot Interaction, 2(2). https://doi.org/10.5898/jhri.2.2.fasola.
Hasan, A. (2024, August 22). Are You Anthropomorphizing AI? Blog of the APA. American Philosophical Association. https://blog.apaonline.org/2024/08/20/are-you-anthropomorphizing-ai-2/.
Jones, C. R. & Bergen, B. K. (2025, March 31). Large Language Models Pass the Turing Test. arXiv. https://arxiv.org/abs/2503.23674.
Li, M. & Suh, A. (2022). Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets, 32(4), 2245–2275. https://doi.org/10.1007/s12525-022-00591-7.
McDermott, D. (1976). Artificial intelligence meets natural stupidity. ACM SIGART Bulletin, 57, 4–9. https://doi.org/10.1145/1045339.1045340.
Salles, A., Evers, K. & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88–95. https://doi.org/10.1080/21507740.2020.1740350.
Stroessner, S. J. & Benitez, J. (2019). The Social Perception of Humanoid and Non-Humanoid Robots: Effects of Gendered and Machinelike Features. International Journal of Social Robotics, 11(2), 305–315. https://doi.org/10.1007/s12369-018-0502-7.
Uysal, E., Alavi, S. & Bezençon, V. (2023). Anthropomorphism in Artificial Intelligence: A Review of Empirical Work Across Domains and Insights for Future Research. Review of Marketing Research, 273–308. https://doi.org/10.1108/s1548-643520230000020015.
Watson, D. (2019). The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. Minds and Machines, 29(3), 417–440. https://doi.org/10.1007/s11023-019-09506-6.
Further reading/watching/listening:
Books & Articles:
Nyholm, S. (2020). Humans and Robots: Ethics, Agency, and Anthropomorphism. Philosophy, Technology and Society. Rowman & Littlefield International.
Schneider, S. (2019). Artificial You: AI and the Future of Your Mind. Princeton University Press. https://doi.org/10.2307/j.ctvfjd00r.
Videos & Podcasts:
“The Ethics of AI Assistants” with Iason Gabriel
Listen on Spotify.