What inspired you to join the alignAI project?
From my previous work on AI projects, I quickly realised that I have two strong interests: AI applications for personal informatics (especially mental health and wellbeing) and the responsible and trustworthy aspects of such applications, particularly fairness and explainability. With my computer science background, I therefore focused mostly on building models for mental health and wellbeing tasks, generating explanations and conducting fairness checks. However, I often struggled to interpret model outcomes, variable correlations and fairness disparities because I lacked domain knowledge (i.e. medical and legal expertise). My alignAI Doctoral Candidate position excites me because it perfectly combines my interests while offering the chance to collaborate closely with the experts I’ve always sought to work with.
What is the focus of your research within alignAI?
My research within alignAI focuses on the development and technical evaluation of a value-aligned, agentic AI tool for supporting mental health. I aim to create a tool that is responsible and value-aligned by design, using state-of-the-art agents, while the entire architecture, implementation and evaluation prioritise multiple aspects of trustworthy AI: from fairness considerations to legal and regulatory standards, and from psychological guidelines to human-in-the-loop approaches for risk modelling in public health.
What excites you most about working at the intersection of AI and computer science?
AI has become part of nearly every aspect of our lives, mainly because data is available on the internet for almost everything. But what happens when certain human aspects and behaviours, such as empathy, nuanced feelings and internalised stigma, can’t be represented by data? Do we need to find ways to generate such data, or should we fundamentally change the AI tools we typically develop? Perhaps it’s a combination of both. Or is it neither? I hope that, through this three-year journey, we can leverage our different backgrounds and expertise to deliver an AI solution that is not only technically sound but, most importantly, prioritises human values and is truly made by and for the people who need it.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or further?
With the release of large language models (LLMs), the most powerful AI models ever developed, I believe the focus has shifted beyond simply building newer, more capable technology. Can they even get much more capable? The emphasis now lies on human-centred evaluation and value alignment, and this shift makes sense given the challenge of understanding how these models generate their outputs, and the resulting questions of trustworthiness and reliability. These new directions demand interdisciplinary teams combining domain, technical and societal expertise. In critical, high-risk domains like mental health, such collaboration isn’t just beneficial; it’s absolutely essential to ensure these tools are value-aligned by design and deliver a real, positive impact.
If you had to explain your research to a friend outside academia, how would you describe it?
I usually say something like: “Imagine a ChatGPT specifically designed for mental health support. It could help you with practical things, like understanding the process of booking an appointment or just being there when you need to talk to someone. It won’t ever tell you how to solve your problems directly, but it aims to understand your situation and feelings, helping you explore options or perspectives you might not have considered before”.
Where can people follow your work?
You can follow me on LinkedIn.