LLMs as Tools in the Continuum of Human Cultural Evolution

Human culture is unique among cultures, human and non-human alike, in how knowledge is transmitted and preserved. This process is referred to as the “ratchet effect” (Tomasello et al., 1993): a mechanism that faithfully preserves existing knowledge and skills as they are transmitted, while also allowing new innovations to be added. This dual process ensures the accumulation of cultural knowledge.

Q&A with DC Hewei Gao

What inspired you to join the alignAI project? I’m fascinated by what large language models (LLMs) could mean for education. In many ways, generative AI can help democratise access to knowledge: suddenly, more people can get explanations, feedback and learning support anytime they need it. But this power also comes with real risks, especially when the […]

Q&A with PI Jesse Benjamin

PI Jesse Benjamin

In this video interview, we speak with Professor Jesse Benjamin, Assistant Professor in Industrial Design at Eindhoven University of Technology. He shares how his work in design research and human-centred AI contributes to the alignAI network, reflects on supervising PhD researchers across design and machine learning and discusses his hopes for shaping AI systems that better reflect societal values and public concerns.

Q&A with PI Stephan Wensveen

PI Stephan Wensveen

In this video interview, we speak with Professor Stephan Wensveen, Professor of Transformative Design at Eindhoven University of Technology. He discusses his approach to guiding and supervising PhD candidates, how his work connects design research with emerging technologies and what message he believes should be shared with those outside academia who are concerned about the future of AI.

Tying the Knots of Trust: Understanding the Evolving Sociotechnical Ecosystem of Trust in LLMs

When we interact with a chatbot, ask a digital assistant for advice or rely on LLMs to summarise a long document, we are doing something profoundly human: we are trusting. Trust is part of what makes cooperation possible between people, but increasingly, also between people and machines. In the age of artificial intelligence (AI), and particularly with the rapid rise of large language models (LLMs), trust has become a central issue.

Q&A with PI Line Clemmensen

PI Line Clemmensen

In this video interview, we speak with Professor Line Clemmensen, Professor of Machine Learning at the Technical University of Denmark. She shares what her team contributes to the project, her approach to supervising PhD candidates, and how she helps them grow in both technical expertise and ethical responsibility.

Q&A with PI Sneha Das

PI Sneha Das

In this video interview, we speak with Assistant Professor Sneha Das, a researcher at DTU whose work focuses on trustworthy AI and responsible data-driven systems. She discusses the perspective her institute contributes to the project, and how the programme can guide policy and public understanding related to AI and alignment.

Exploring AI Advancement at NeurIPS 2025

Cen Lu

The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025) was held from December 2-7 in San Diego. Our alignAI doctoral candidate Cen Lu attended and presented his poster “Chain-of-Model Learning for Language Model” at the poster session.

Q&A with PI Martijn Willemsen

PI Martijn Willemsen

In this video interview, we speak with Professor Martijn Willemsen, a researcher at Eindhoven University of Technology known for his studies on human decision processes and recommender systems. He talks about the doctoral candidates he works with and the projects they pursue, his reasons for joining alignAI, the perspective his institute adds to the project, and the cooperation he hopes to see among partners.

Q&A with PI Daniel Gatica-Perez

PI Daniel Gatica-Perez

In this video interview, we speak with Professor Daniel Gatica-Perez, Head of the Social Computing Group at Idiap and Professor at École Polytechnique Fédérale de Lausanne. He emphasises the importance of a human-centred approach and shares how he believes the alignAI project can help society.

Contact Us

Fill out the form below and we will contact you as soon as possible.