What inspired you to join the alignAI project?
During the last few years, AI has transformed society. Curiously enough, I was raised by a family of engineers who would repeatedly discuss the potential of artificial intelligence. Back then I dismissed it as futuristic, fictional and possibly even Orwellian. I forged my own path in medicine and then global mental health, while also completing an internship at a tech startup and later closely following the conversations surrounding the EU AI Act. The alignAI project was especially appealing to me because value alignment and ethical considerations sit at its core. Rapid transformation is driven by innovation, which carries risks that may disproportionately affect the most vulnerable. alignAI intends to mitigate those risks and ensure that AI evolves responsibly, placing human and societal values at the centre.
What is the focus of your research within alignAI?
Using mixed-methods research, I will identify the values, needs and preferences that LLMs should align with, in the context of an AI-based digital tool. More specifically, I will look at how value-aligned AI tools affect youth and caregiver access to, satisfaction with, and engagement with child and adolescent mental health services. I will start by exploring caregiver and youth experiences with AI-based digital tools for mental health, then co-create an AI-based digital tool informed by stakeholder perspectives on the opportunities and risks of implementing AI technology in existing systems of care, and finally evaluate the tool prototype. The feedback I gather from users will directly shape and improve the LLM, promoting an iterative development process that places end users and stakeholders at the core.
What excites you most about working at the intersection of AI and mental health?
Although mental health has been my vocation for a long time, only recently has society given it much more attention. On the one hand, this is extremely positive, especially considering the stigma associated with mental health; on the other, it makes the field easily exploitable. The demand for mental health services and support far exceeds what countries can provide, which opens the door to industry solutions, some of which may prioritise profit over people. Research reflects the benefits of these innovations; however, at this speed, academia is struggling to keep up with the commercialisation of digital mental health tools. As a researcher, I am aware of and excited by the potential of this field, but I remain critical and cautious about the risks and ethical implications. AI and mental health will be at the forefront of big advancements in society, and I want to ensure that the mental health and wellbeing of the person or patient is the goal, not just AI and mental health innovation in itself.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or further?
I have always valued interdisciplinary collaboration, even though it comes with its challenges and communication barriers. I believe in cross-disciplinary and cross-sectoral collaboration, and that working in silos limits our work and potential. People come from different backgrounds and lived experiences, and all forms of knowledge stimulate creativity, new ideas, perspectives and ways of looking at the same problem. AI can be applied in a vast number of ways, but expert knowledge of the field of application is key to maximising its benefits.
If you had to explain your research to a friend outside academia, how would you describe it?
I’m researching how to make AI tools that actually help children and families when they’re going through tough times. Right now, there are a lot of digital mental health apps and tools out there, but not all of them are appropriate or even safe. So, I’m talking directly to everyone involved, families, children, doctors, teachers and so on, to understand what they need and what kind of help they’re looking for. Then, I’m using what they tell me to help design a smart AI tool that feels right to use, one that’s respectful, trustworthy and useful. I would say: “Think of it like building an app with people, not just for them, and making sure the AI behind it is learning the right things along the way”.
Where can people follow your work?
You can follow me on LinkedIn.