What inspired you to join the alignAI project?
As a research fellow in Italy and later as an NLP consultant, I observed how different organisations adopt AI technologies. In industry, there’s often a rush to integrate LLMs into products without fully considering the risks, like biased data or unclear decision-making processes. Companies frequently adopt AI simply because competitors are doing it, overlooking important ethical implications. This contrast with academia’s more reflective approach drew me to the alignAI project. It provides space to thoughtfully examine how we can ensure AI technologies align with human values to promote more responsible AI development.
What is the focus of your research within alignAI?
As a doctoral candidate at EPFL, I research how LLMs are changing news creation and consumption within the alignAI project. My work combines qualitative and quantitative methods to identify emerging values and practices in AI-driven news. I examine this from two perspectives: how news creators use LLMs (focusing on values like truthfulness and transparency) and what values news consumers expect from AI-mediated content (including personalisation and algorithmic awareness). My research involves engaging directly with diverse communities to understand how these dynamics vary across different socio-economic backgrounds and contexts, including local versus global news and sensitive content handling. The goal is to help develop news technologies that are efficient, ethical, inclusive and trustworthy.
What excites you most about working at the intersection of AI and media?
We’re experiencing a technological revolution that’s reshaping how information is produced and consumed. Being part of this transformation feels like both a privilege and a responsibility. What excites me most is contributing to how AI integrates into the news ecosystem. Since news deeply influences public opinion and democratic discourse, ensuring AI tools uphold values like transparency and fairness is crucial. I’m particularly inspired by working directly with journalists and media professionals, gathering insights from both creators and consumers. It’s rewarding to see academic research translate into practical changes in how news is created and delivered.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or further?
Interdisciplinary collaboration defines the alignAI project. Our network includes researchers from philosophy, law, computer science, engineering, medicine, humanities and more. This diversity creates a rich, multifaceted approach to aligning AI with human values. No single field can solve AI alignment challenges alone. We need technical innovation, philosophical insight for ethical trade-offs, legal expertise for regulation and social science perspectives grounded in human experience. This collaboration enhances both the depth and relevance of our research. Looking beyond the project, I believe this kind of interdisciplinarity is essential for building a future where AI systems are not only powerful, but also responsible and inclusive. If we want AI that truly serves society, we must bring a wide range of voices into the conversation. The more perspectives we include, the more meaningful and ethical our progress will be.
If you had to explain your research to a friend outside academia, how would you describe it?
I study how tools like GPT are changing news consumption and creation, and how to ensure they don’t cause harm. I examine how journalists use these tools to write articles and what news readers expect from these technologies. While these tools are powerful, they can be biased or misleading. I talk to journalists and readers to understand what they value, like truthfulness and transparency, then work to ensure AI respects those values.
Where can people follow your work?
You can follow me on LinkedIn.