Panel Discussion: “LLMs From a Socio-technical System Perspective”

As part of the first alignAI Seasonal School, Dr. Caitlin Corrigan, Executive Director at the Institute for Ethics in Artificial Intelligence (IEAI) and alignAI Project Lead, hosted an insightful panel discussion on May 14, 2025. Three researchers from diverse fields, Prof. Ingo Zettler (University of Copenhagen), Dr. Daryna Dementieva (TUM Social Computing Group) and Dalia Yousif Ali (TUM School of Social Sciences and Technology), were invited to discuss some of the most pressing questions in AI and LLM research today.

Prof. Ingo Zettler is a professor of psychology at the University of Copenhagen and director of the Copenhagen Center for Social Data Science (SODAS). His work focuses on Behavioural Economics, Educational Psychology, Personality Psychology, and Work and Organisational Psychology.

Dr. Daryna Dementieva is a data scientist and postdoctoral researcher in the Social Computing Research Group at the Technical University of Munich. Her research bridges natural language processing and social impact, with a focus on multilingual applications, fake news detection and explainability in AI. She is especially engaged in extending these innovations to underrepresented languages, including Ukrainian.

Dalia Yousif Ali is a doctoral candidate and research associate at the TUM School of Social Sciences and Technology. Drawing on a background in software engineering and business, she explores the societal dimensions of digital transformation, with a focus on promoting equity, fostering ethical innovation and advancing societal well-being.

Together with the audience, the panelists unpacked the complex socio-technical challenges of aligning LLMs with diverse human perspectives.

One of the central insights, and simultaneously one of the greatest challenges discussed, was the plurality of human values. Values are not universal but instead are shaped by culture, language, domain and context. The panelists emphasized the difficulty of reconciling the push for universal principles with the need to incorporate localized nuances. Attempts to hard-code a single set of values risk marginalising entire communities whose experiences and norms fall outside dominant narratives.

Another key challenge is the issue of data – both its quality and its representativeness. The panel highlighted the problem of “garbage in, garbage out”, and noted that models like ChatGPT are primarily trained on English-language, Western-centric data. This leads to poor performance in many other languages and cultural contexts, and undermines efforts to create globally fair and accessible AI systems. 

The conversation also explored what it actually means to build trustworthy AI systems. For some, trustworthiness may mean transparency or explainability; for others, it may be about ensuring fairness or accuracy. Crucially, the panel stressed that trustworthy systems must have the ability to say “I don’t know” rather than generate confident but misleading hallucinations. Benchmarks such as BBQ (the Bias Benchmark for QA) were discussed as tools for surfacing social biases. Rather than relying on one-size-fits-all solutions, the speakers called for interdisciplinary collaboration and the development of group-specific evaluations and benchmarks. Without this, there is a real risk that value alignment works well only for certain privileged groups while leaving others behind.

A fundamental question emerged during the discussion: whose values should take precedence? With over 7,000 languages spoken globally, the panel called for greater linguistic accessibility and cultural inclusion in AI development. This also ties into broader concerns about transparency, AI literacy and standard-setting, especially within educational and research institutions. Building systems that reflect a truly global set of values requires making AI not just usable, but genuinely inclusive and context-aware.

The issue of responsibility was also addressed. Who should lead the effort to align LLMs with human values: lawmakers, engineers, corporations or civil society? While there was no consensus on a single answer, the panelists agreed that shared responsibility is essential. In particular, they encouraged computer scientists and technical developers to engage more with researchers, ethicists and communities to focus on real-world impacts, not just performance optimisation.

In response to a question from the audience about the biggest gaps in current alignment research, the panelists noted a lack of clarity about what exactly constitutes “values” in the context of LLMs. They reiterated the importance of defining values inclusively, ensuring systems are accessible across languages and societies, and incorporating transparency and explainability into the core of AI design.

The session concluded on a personal and motivating note for the alignAI doctoral candidates (DCs). The panelists especially encouraged early-career researchers and doctoral candidates to listen actively to as many different people as possible, collaborate across disciplines and pursue work they genuinely care about, not just what looks good on paper. Lastly, reflecting on their own PhD years, they encouraged the DCs not to neglect the physical world or their personal emotions, wants and needs.

As LLMs become ever more embedded in everyday life, this panel made one thing abundantly clear: responsible value alignment is not only a technical challenge but also a societal imperative. Achieving it demands diverse voices, shared accountability and, above all, intentionality.

Following the panel, Day 2 of the alignAI Seasonal School continued with a mix of engaging workshops, talks and presentations. Stay tuned via alignAI or follow us on LinkedIn as we continue our mission to shape responsible AI through collaboration, innovation and impact.
