What inspired you to join the alignAI project?
When I first read the description of the alignAI project, and particularly the position I applied for, I immediately saw how it resonated with both my multidisciplinary background and my personal interest in AI alignment. I deeply believe that challenges such as aligning large language model (LLM) technologies with human values in contexts like education, mental health and online news need to be tackled as soon as possible to ensure an ethical and responsible use of these technologies. In a field where new tools emerge faster than we can adapt to the old ones, it’s crucial to keep a human-centred approach at the forefront. This ensures that all users, regardless of background, can benefit from these advancements. The real-world consequences of neglecting responsible AI implementation are already visible as these technologies become more embedded in our daily lives. That’s why I decided to contribute to a project that closely reflects my personal values and expertise.
What is the focus of your research within alignAI?
My work focuses on the online news use case, particularly the design and evaluation of an LLM-based tool that can support both online news creators and consumers. The research will employ participatory methods to ensure that explainable and transparent AI approaches are responsibly integrated into the design and development workflow. Operating at the intersection of state-of-the-art techniques and user-based insights, the aim is to create tools that are not only technically robust but also aligned with the real needs, values and expectations of diverse user groups.
What excites you most about working at the intersection of AI and media?
The online news use case comes with an interesting and exciting mix of challenges and perspectives. At the core of the research is the collaboration with partners from the online media sector, covering both local and broader-scope news. This will allow us to gather insights from a diverse pool of stakeholders, with direct feedback from both news creators and readers. This diversity will also demand a thorough understanding of the varying needs and expectations of different user groups, be they creators or readers, members of a local community or a broader one. These dichotomies lie at the core of the research on the responsible use of LLMs for online news generation and consumption.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or further?
I truly believe the added value of this project lies in its multidisciplinary nature: the more technology intersects with our lives, the more its understanding requires inputs from different backgrounds. AI research particularly needs to embrace this approach to ensure a responsible implementation, benefiting not only from researchers in technical fields but also from social sciences, philosophy, ethics, design and legal studies. Neglecting these perspectives risks accelerating the irresponsible or unethical implementation of AI technologies. As a researcher, I’m excited to learn from the diverse perspectives of colleagues working across various disciplines.
If you had to explain your research to a friend outside academia, how would you describe it?
As my research focuses on something we interact with frequently, online news, I would tell them that I aim to contribute to the safe and responsible implementation of AI technologies in this context, with input from researchers across different fields. Misinformation, information overload, filter bubbles and biases are just a few examples of well-known problems that already shape the experience of the average news reader. As the pace of change accelerates and the AI race shows no signs of slowing, it’s important to be prepared to safely and responsibly integrate new technologies into this complex scenario that is part of our daily lives.
Where can people follow your work?
You can follow my work on LinkedIn.
Image Attribution
Enhanced by: Krea AI
Date: 16/07/2025