alignAI
Aligning LLM Technologies with Societal Values
About alignAI
The alignAI Doctoral Network will train 17 doctoral candidates (DCs) to work in the international and highly interdisciplinary field of LLM research and development. The core of the project focuses on the alignment of LLMs with human values, identifying relevant values and methods for alignment implementation. Two principles provide a foundation for the approach. First, explainability is a key enabler for all aspects of trustworthiness, accelerating development, promoting usability, and facilitating human oversight and auditing of LLMs. Second, fairness is a key aspect of trustworthiness, facilitating access to AI applications and ensuring equal impact of AI-driven decision-making. The practical relevance of the project is ensured by three use cases in education, positive mental health, and news consumption. This approach allows us to develop specific guidelines and test prototypes and tools to promote value alignment. We follow a unique methodological approach, with DCs from social sciences and humanities “twinned” with DCs from technical disciplines for each use case (9 DCs in total), while the other 8 DCs carry out horizontal research across the use cases.
About Large Language Models
Large Language Models (LLMs) are trained on broad data, using self-supervision at scale, to complete a wide range of tasks. Use of LLMs has expanded rapidly in recent months, driven by applications such as ChatGPT. Although LLMs bring many opportunities to improve our everyday lives, their impacts on humans and society have not yet been prioritized or fully understood. Given the rapid development of these tools, the risk of negative implications is significant if LLMs are not developed and deployed in a way that is aligned with human values and responds to individual needs and preferences. To mitigate these consequences, academia, in close collaboration with industry, needs to train the next generation of researchers to understand the complexities of the socio-technical implications surrounding the use of LLMs.
Project Map
The alignAI project is built around a highly interdisciplinary training program and research methodology designed to achieve the DN’s research objectives:
- O1. Establish a unique doctoral training programme (i) equipping DCs with the capacity to work in interdisciplinary environments, (ii) providing high-quality scientific training, (iii) equipping DCs with communication capacities and (iv) disseminating knowledge beyond the beneficiary institutions
- O2. Identify the human values and user requirements/preferences that LLMs should align with
- O3. Explore implementable ways of applying the principles of explainability (XAI) and fairness in the specific context of LLM use to enable alignment with the values identified in RIO1
- O4. Design and build value-aligned LLM prototype tools based on outcomes from RIO1 and RIO2
- O5. Test and validate the technical prototype tools from RIO3 and the non-technical tools/methods/models from RIO1 and RIO2
- O6. Translate learnings from RIO1–RIO4 into research outputs, contextualising an “enabling environment” for value-aligned LLMs
The doctoral training objective O1 is described in detail in Section 1.3.
The five research objectives will be addressed in a context-specific way throughout the project by investigating them as part of three use cases: (i) Education, (ii) Positive Mental Health and (iii) Online News Consumption. Fig. 3 presents the proposed research methodology. This is followed by a detailed description of the research activities and their relevance for the project objectives.
Newsroom

LLMs as Tools in the Continuum of Human Cultural Evolution
Human culture is unique in how knowledge is transmitted and preserved, distinguishing it from all non-human cultures. This process is referred to as the “ratchet effect” (Tomasello et al., 1993): a mechanism that faithfully conserves existing knowledge and skills while also contributing new innovations. This dual process ensures the accumulation of cultural knowledge.

Q&A with DC Hewei Gao
What inspired you to join the alignAI project?
I’m fascinated by what large language models (LLMs) could mean for education. In many ways, generative AI

Ctrl-Alt-Deterrence: Rethinking Stability in the Age of Cyber, AI and Autonomy
On February 13th, the IEAI co-hosted with Amerikahaus an official side event of the Munich Security Conference 2026 titled “Ctrl-Alt-Deterrence: Rethinking Stability in the Age of Cyber, AI and Autonomy”. The panel brought together leaders from defense, academia, policy and industry to explore how artificial intelligence and autonomous systems are reshaping deterrence theory and practice.

Renewing Craftsmanship in the Age of AI: Toward a Design Pedagogy of Care
Craftsmanship has long held a central place in art and design history. While often associated with form and aesthetics, its deeper emphasis lies in dedication, tradition and quality: in the care and attention given to the process of making. As technology accelerates and productivity becomes a dominant cultural value, design movements have emerged to resist this pace. Slow technology, for instance, encourages mindful engagement with products (Hallnäs & Redström, 2002), while speculative design and design fiction invite audiences to imagine alternative futures (Dunne & Raby, 2013; Bleecker, 2009). Yet both still centre primarily on the perception of the audience and on how the work is received or interpreted. Craftsmanship, by contrast, turns inward: it concerns the mode of practice, the values and sensibilities embodied by the maker in the act of creation. It asks not what is made, but how it is made, and how that process shapes the maker themselves. In the context of design education, this focus on practice makes craftsmanship particularly resonant: it cultivates an attitude, a rhythm and a sense of responsibility toward making that extends beyond outcomes.

Q&A with PI Jesse Benjamin
In this video interview, we speak with Professor Jesse Benjamin, Assistant Professor in Industrial Design at Eindhoven University of Technology. He shares how his work in design research and human-centred AI contributes to the alignAI network, reflects on supervising PhD researchers across design and machine learning and discusses his hopes for shaping AI systems that better reflect societal values and public concerns.

Q&A with PI Stephan Wensveen
In this video interview, we speak with Professor Stephan Wensveen, Professor of Transformative Design at Eindhoven University of Technology. He discusses his approach to guiding and supervising PhD candidates, how his work connects design research with emerging technologies and what message he believes should be shared with those outside academia who are concerned about the future of AI.