alignAI

Aligning LLM Technologies with Societal Values

About alignAI

About the project 

The alignAI Doctoral Network will train 17 doctoral candidates (DCs) to work in the international and highly interdisciplinary field of LLM research and development. The core of the project focuses on the alignment of LLMs with human values, identifying relevant values and methods for alignment implementation. Two principles provide a foundation for the approach. First, explainability is a key enabler for all aspects of trustworthiness, accelerating development, promoting usability, and facilitating human oversight and auditing of LLMs. Second, fairness is a key aspect of trustworthiness, facilitating access to AI applications and ensuring equal impact of AI-driven decision-making. The practical relevance of the project is ensured by three use cases in education, positive mental health, and news consumption. This approach allows us to develop specific guidelines and test prototypes and tools to promote value alignment. We follow a unique methodological approach, with DCs from social sciences and humanities “twinned” with DCs from technical disciplines for each use case (9 DCs in total), while the other 8 DCs carry out horizontal research across the use cases.

About Large Language Models

Large Language Models (LLMs) are trained on broad data, using self-supervision at scale, to complete a wide range of tasks. Their use has risen sharply in recent months due to applications such as ChatGPT. Although LLMs bring many opportunities to improve our everyday lives, their impacts on humans and society have not yet been prioritized or fully understood. Given the rapid development of these tools, the risk of negative implications is significant if LLMs are not developed and deployed in a way that is aligned with human values and responds to individual needs and preferences. To mitigate these consequences, academia, in close collaboration with industry, needs to train the next generation of researchers to understand the complexities of the socio-technical implications surrounding the use of LLMs.

Chat AI screen (source: Canva)

Participating Organisations

Project Map

The alignAI project is built around a highly interdisciplinary training programme
and research methodology designed to achieve the DN’s training objective and five research objectives:

  • O1. Establish a unique doctoral training programme, (i) equipping DCs with the capacity to work in interdisciplinary environments, (ii) providing high-quality scientific training, (iii) equipping DCs with communication capacities and (iv) disseminating knowledge beyond the beneficiary institutions
  • O2. Identify the human values and user requirements/preferences that LLMs should align with
  • O3. Explore implementable ways of applying the principles of explainability (XAI) and fairness in the specific context of LLM use, to enable alignment with the values identified in O2
  • O4. Design and build value-aligned LLM prototype tools based on the outcomes of O2 and O3
  • O5. Test and validate the technical prototype tools from O4 and the non-technical tools/methods/models from O2 and O3
  • O6. Translate learnings from O2–O5 into research outputs, contextualising an “enabling environment” for value-aligned LLMs

The doctoral training objective O1 is described in detail in Section 1.3. The five research objectives will be addressed in a context-specific way throughout the project by investigating them as part of three use cases: (i) Education, (ii) Positive Mental Health and (iii) Online News Consumption. Fig. 3 presents the proposed research methodology, followed by a detailed description of the research activities and their relevance for the project objectives.
alignAI Project Map

Newsroom

Insights from the AI in Science Summit 2025

On 3-4 November 2025, Doctoral Candidates Katerina Drakos and Eva Paraschou participated in the AI in Science Summit 2025 (AIS25) in Copenhagen, Denmark. The summit served as a premier gathering for scientists, industry leaders, investors and policymakers to discuss how AI is revolutionising scientific discovery and how Europe can spearhead this shift through a responsible, values-driven approach.

Read More »
How Do LLMs Reason? The Power of Thinking Longer and Test-time Scaling

For years, the industry has focused on making models bigger. This training-time scaling (Kaplan et al., 2020) made models highly fluent, similar to a student who has memorised the entire textbook. But fluency is not the same as reasoning. Large language models (LLMs) still struggle with complex logic, maths or coding tasks because they respond too quickly, predicting the next word without truly thinking (McCoy et al., 2023).

Read More »
AI is Reshaping Regulatory Thinking

Trigger Warning/Disclaimer: This blog post mentions suicide. If you or someone you know is experiencing suicidal thoughts or a crisis, please reach out immediately for help. A hotline in your country can be found on befrienders.org.

AI is reshaping not only our social practices but also the foundations of regulatory thinking. The transformative power of AI has compelled regulators to adopt a regulatory learning process, shifting from static legal doctrine to an adaptive, learning-driven regulatory approach (Hadfield & Clark, 2023). This shift is driven by both the emergent challenges of AI and the motivation to devise laws that enable AI innovation while protecting against its potential risks (Smuha, 2019). We present examples from legal doctrine to argue that AI does not merely challenge existing legal rules but disrupts the obsolete assumptions underlying traditional regulation, making regulatory learning a structural necessity rather than a policy choice.

Read More »
Senior Researcher Santiago Hurtado

Q&A with Santiago Hurtado

In this interview, we speak with Santiago Hurtado, M.Ed., research assistant and doctoral student at TUM. He discusses what drew him to alignAI, the perspective his institute contributes, and how he supports doctoral candidates while giving them room to pursue their own ideas.

Read More »
PI Nicole Lønfeldt

Q&A with PI Nicole Lønfeldt

In this video interview, we speak with Dr. Nicole Nadine Lønfeldt, Senior Researcher at the Capital Region of Denmark. She discusses how the mental health use case within alignAI can lead to practical outcomes, reflects on supervising PhD researchers working across clinical and technical domains and shares a message for those outside academia who are concerned about the future of AI.

Read More »
LLMs as Tools in the Continuum of Human Cultural Evolution

Human culture is unique among all cultures, human and non-human alike, in how knowledge is transmitted and preserved. This process is referred to as the “ratchet effect” (Tomasello et al., 1993): a mechanism that faithfully conserves existing knowledge and skills across exchanges while also contributing new innovations. This dual process ensures the accumulation of cultural knowledge.

Read More »

Contact Us

Fill out the form below and we will contact you as soon as possible