From Checklists to Care: Rethinking “Ethical AI” in Mental Health

Author – Simay Toplu

“Is it fair?” “Is it explainable?” “Is it safe?”

These are the questions commonly used to evaluate AI systems. In mental health, they seem especially relevant. Ethical guidelines, audit tools and compliance checklists promise trustworthy AI. But how helpful are these tools when emotional nuance and personal vulnerability come into play?

AI ethics frameworks are growing in number and influence (Jobin et al., 2019). Across industry, academia and regulation, a common vocabulary has emerged: fairness, transparency and safety. Ethical toolkits are widely promoted as ways to design more responsible systems (Morley et al., 2020). Yet when applied to AI used in mental health support, this technical framing may fall short even in non-clinical contexts.

People don’t seek mental health support just to receive information. They often find themselves struggling with uncertainty, emotional distress or practical challenges that can feel overwhelming. When these moments are met by chatbots or digital companions, technical reliability alone may not provide meaningful support. A system might follow every rule and still leave someone feeling misheard or misunderstood. In emotional domains, meeting standards is not the same as knowing how an interaction was actually felt.

When Ethics Becomes Optics

Ethical AI tools offer structure and accountability. But when used without deeper reflection, they can sometimes become performative. Companies highlight fairness dashboards or bias evaluations as evidence of responsibility (Metcalf et al., 2021). Words like “trustworthy” or “human-centred” appear frequently in mission statements. Beneath this surface, important questions can remain unresolved:

Whose values shape these tools?

What types of harm are not visible in assessments?

How are emotional and relational risks being addressed?

These questions feel especially relevant in the mental health context. Systems that respond to distress with calming phrases may seem helpful. But if these interactions encourage avoidance, reinforce self-blame or foster emotional dependence, their long-term impact may be less supportive than it appears.

Most evaluations still prioritise performance metrics and demographic parity (Matos et al., 2025). These benchmarks can offer useful insights, but they rarely capture whether someone felt understood, whether context was acknowledged or whether the exchange supported a sense of control. These dimensions matter, yet they often go unmeasured.
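
To make concrete what a benchmark like demographic parity actually measures, here is a minimal sketch in Python. The function, data and group labels are illustrative assumptions, not drawn from any of the cited studies:

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups
    (0.0 means perfect demographic parity)."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + pred, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical binary "flag for follow-up" decisions across two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")

A gap of zero would pass this check, yet the number says nothing about whether anyone in either group felt understood, which is precisely the limitation described above.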

Reframing the Ethical Lens

Rather than depending solely on checklist-driven assessments, developers and researchers working with AI in mental health may benefit from a broader lens. This includes recognising that emotional and social factors play a central role in how AI is experienced.

Designing with care means anticipating how people might interact with AI tools during vulnerable moments (Axelsson et al., 2024). Tone, pacing and clarity all shape how a system is received. Supportive interactions can unintentionally become forms of emotional dependence if boundaries are unclear. Similarly, technical explanations that make sense to developers might confuse users who are looking for help.

Ethical work in this space benefits from an ongoing and inclusive process. Listening to the lived experiences of users, especially those underserved by existing systems, can reveal ethical tensions that technical metrics don’t capture.

Mental health is understood differently across cultures and communities (Gopalkrishnan, 2018). What feels empathetic in one setting may seem inappropriate in another. Language, tone and cultural norms matter. Ethical alignment means embracing this complexity.

Accountability That Goes Deeper

Meeting technical requirements is a valuable step. But ethical development involves more than compliance. Supporting well-being requires clarity about purpose, limitations and shared responsibility.

When AI tools are designed to support decision-making or help users reflect on complex experiences, those intentions should be clearly communicated. Evaluation should include not only technical benchmarks but also feedback from real users (Thieme et al., 2023).

Accountability in this space includes building pathways for feedback, options for escalation and mechanisms that allow users to understand and question how the system works. Attention to long-term impact matters as much as early performance.
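
As a rough illustration of what those mechanisms might look like in code, here is a minimal Python sketch of a session object with a feedback pathway, an escalation option and a reviewable interaction log. Every name and behaviour here is a hypothetical assumption, not a description of any existing system:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Interaction:
    user_message: str
    system_reply: str
    timestamp: datetime = field(default_factory=datetime.now)
    user_feedback: str | None = None   # pathway for feedback
    escalated: bool = False            # option for escalation

@dataclass
class SupportSession:
    log: list[Interaction] = field(default_factory=list)

    def record(self, user_message: str, system_reply: str) -> Interaction:
        interaction = Interaction(user_message, system_reply)
        self.log.append(interaction)
        return interaction

    def leave_feedback(self, interaction: Interaction, text: str) -> None:
        interaction.user_feedback = text

    def escalate(self, interaction: Interaction) -> None:
        # In a real deployment this would route the exchange to a human.
        interaction.escalated = True

    def review(self) -> list[str]:
        # Lets users see and question what the system did, and when.
        return [f"{i.timestamp:%H:%M} - reply to: {i.user_message!r}"
                for i in self.log]

Even a structure this small changes the relationship: the user can respond, opt out towards a human and inspect the record, rather than simply receiving output.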

Mental Health as an Ethical Priority

AI systems used in the mental health space interact with deeply personal experiences (Siddals et al., 2024). Responses don’t happen in isolation; they can shape how people feel about themselves, their choices and their relationships. Ethical development in this space benefits from a mindset of support and care.

Frameworks, principles and audits are all part of the process. But what makes ethics meaningful is often the willingness to engage with uncertainty, listen to diverse perspectives and adapt based on what people actually need.

Expanding ethical AI in mental health doesn’t require abandoning the tools we have. It requires building on them, asking deeper questions, welcoming wider input and staying open to complexity.

When someone seeks support, they deserve more than functionality. They deserve systems that respond with care and respect.

References:

Axelsson, M., Spitale, M., & Gunes, H. (2024). Robots as mental well-being coaches: Design and ethical recommendations. ACM Transactions on Human-Robot Interaction, 13(2), 1–55. https://doi.org/10.48550/arXiv.2208.14874

Gopalkrishnan, N. (2018). Cultural diversity and mental health: Considerations for policy and practice. Frontiers in Public Health, 6, 179. https://doi.org/10.3389/fpubh.2018.00179

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Matos, J., Van Calster, B., Celi, L. A., Dhiman, P., Gichoya, J. W., Riley, R. D., … Collins, G. S. (2025). Critical appraisal of fairness metrics in clinical predictive AI [Preprint]. arXiv. https://arxiv.org/abs/2506.17035

Metcalf, J., Moss, E., Watkins, E. A., Ranjit, S., & Elish, M. C. (2021). Algorithmic impact assessments and accountability: The co-construction of impacts. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 735–746). https://doi.org/10.1145/3442188.3445935

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5

Siddals, S., Torous, J., & Coxon, A. (2024). “It happened to be the perfect thing”: Experiences of generative AI chatbots for mental health. npj Mental Health Research, 3(1), 48. https://doi.org/10.1038/s44184-024-00097-4

Thieme, A., Hanratty, M., Lyons, M., Palacios, J., Marques, R. F., Morrison, C., & Doherty, G. (2023). Designing human-centered AI for mental health: Developing clinically relevant applications for online CBT treatment. ACM Transactions on Computer-Human Interaction, 30(2), Article 27, 1–50. https://doi.org/10.1145/3564752

Further reading:

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4

Tavory, T. (2024). Regulating AI in mental health: Ethics of care perspective. JMIR Mental Health, 11(1), e58493. https://doi.org/10.2196/58493

Image Attribution

Generated by: ChatGPT
Date: 01/07/2025
Prompt: “Can you generate an image for my blog post about ethical AI in mental health? Maybe something like a person looking at their phone chatting with a chatbot. No text on the image.”
