
What inspired you to join the alignAI project?
My previous work with LLMs opened my eyes to both their impressive capabilities and their concerning limitations. While developing LLM-powered agents for machine translation, I witnessed not only how these systems could break down communication barriers, but also how they could generate biased content. That experience made me passionate about ensuring AI systems align with human values, and alignAI’s mission to create ethical LLMs that benefit society matches my own goals. The project’s interdisciplinary approach also appealed to me, as I believe that combining technical methods with insights from other disciplines is essential for responsible AI development.
What is the focus of your research within alignAI?
My research focuses on developing personas for aligned LLM companions. I’m investigating how to identify diverse user personas and create LLM companions that adapt to individual preferences across different cultural contexts. This work builds on my previous experience with LLM-powered agents and with fine-tuning models for specialized applications. I’m particularly interested in how LLM companions can maintain consistent values while remaining flexible enough to serve users with different needs and backgrounds.
What excites you most about working at the intersection of AI and education?
Working at the intersection of AI and education, I’m excited by the possibility of creating AI companions that not only deliver information, but also support the whole learning process. What’s particularly exciting is finding the balance where AI supports education without replacing the human elements of learning. I believe education will be a valuable testing ground for aligned AI, since we’re shaping how future generations will learn.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or further?
Working with people from diverse disciplines is changing how I think about evaluating AI systems: evaluation should go beyond accuracy metrics to measuring actual human outcomes. Collaboration also helps me identify blind spots in my approach to value alignment. The complex challenges of creating beneficial AI can’t be solved from a purely technical perspective.
If you had to explain your research to a friend outside academia, how would you describe it?
I’m teaching AI to play its role better. Today’s AI is like a friend who never remembers your preferences. My research is about creating AI companions that recognise what suits you and what matters to you, whether that’s detailed explanations, creative suggestions or practical advice, and adapt accordingly. But there’s a challenge: these systems also need to uphold certain principles, like being fair. So I’m developing ways for AI to be both personalised and trustworthy at the same time.
Where can people follow your work?
You can follow me on LinkedIn.