On May 12, 2025, the alignAI consortium officially launched its activities with a vibrant Kick-off Event at the TUM Think Tank in Munich. The event marked the beginning of a three-year journey to align large language models (LLMs) with fundamental human values such as fairness and explainability across sectors like education, mental health, and news consumption.
Hosted by the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich, the Kick-off brought together academic and industry leaders, researchers, and the 17 new Doctoral Candidates (DCs) who will form the heart of the project.
A Vision for Responsible AI Development
The event opened with a welcome note and introduction to the IEAI by Prof. Christoph Lütge, followed by an overview of the Marie Skłodowska-Curie Doctoral Network and the broader alignAI vision by Caitlin Corrigan and Auxane Boch. Together, they highlighted the interdisciplinary and international ambition of the program, which brings together five universities, two research institutes, and four industry organisations. With €3.5 million in EU funding, alignAI is set to bridge technical research and societal values through interdisciplinary, cross-border collaboration.
Introducing the Heart of alignAI: Our Doctoral Candidates
The core of the evening featured introductions by the Principal Investigators and, most importantly, the Doctoral Candidates. Against the backdrop of the multidimensional project map spanning disciplines and use cases, each DC gave a two-minute pitch introducing themselves and their research.
The pitches revealed the truly interdisciplinary and international character of the alignAI network, bringing together expertise ranging from industrial design, developmental psychology, and computational linguistics to public health and human rights. These presentations showcased alignAI’s commitment to interdisciplinary teamwork, pairing technical and social science perspectives to tackle real-world challenges in development, governance, and design. Project topics ranged from participatory design of LLM tools for vulnerable groups and legal guardrails for value alignment, to AI-assisted mental health tools, and systems-level robustness and safety in adversarial settings. Rather than working in silos, the cross-cutting setup of the project allows the DCs to collaborate across use cases.
Closing Words and Building Community
The formal program concluded with closing words by Prof. Urs Gasser, who warned about the discrepancy between the rapid proliferation of ethical standards, frameworks, and reports on the one hand and the lack of implementation and enforcement on the other, emphasising the importance of bridging values and practice. He reminded us that technologies like LLMs are “moving targets,” and our ability to keep pace ethically depends on open, interdisciplinary dialogue.
Following the presentations, the guests came together over food and drinks for an evening of conversation and connection, getting to know the faces that will accompany them on this shared journey.
What’s Next?
The Kick-off transitions into a full Seasonal School from May 13–16, featuring workshops on responsible research practices, participatory AI, and deep dives into alignAI’s three use cases. The week will conclude with student group pitches on how to collaborate effectively across disciplines and institutions.
Stay tuned via the alignAI website or follow us on LinkedIn as we continue our mission to shape responsible AI through collaboration, innovation, and impact.