Navigating Truth and Accountability in the Age of AI Information

Journalism, as one of the main driving forces behind information flows in modern societies, has traditionally promoted itself as the medium of truth. The credibility of news institutions and the legitimacy of journalism as a profession have long rested on their ability to produce, verify and disseminate information grounded in factual accuracy and editorial integrity. Yet, in the era of artificial intelligence, these epistemic foundations are being profoundly challenged: generative AI not only replicates or automates journalistic processes, but also potentially transforms them. The generative potential of AI introduces a new layer of uncertainty to news production, as tools that are neither human nor conscious now produce texts bearing the marks of human authorship, originality and even moral voice.
AI-nxiety and AI-gency: Young Adults Navigating Generative AI

As part of the TUM Institute for Ethics in Artificial Intelligence (IEAI) Speaker Series, a December 2025 session focused on young adults and generative AI. The talk, titled “AI-nxiety & AI-gency: Young Adults Navigating Generative AI”, was delivered by Dr. Jaimee Stuart, Senior Researcher and Team Lead at United Nations University Macau.
Beyond Accuracy: Why “Being Right” Isn’t Enough for Human-Centred AI

Imagine the following two scenarios. A teacher asks an AI to review a student’s essay. Its feedback is accurate, the grammar is fixed and the facts are straight, yet the student still feels stuck, with no clue what to try next. A software team asks an AI to flag bugs. The model points to real issues, but the way it explains them leaves new engineers more confused than confident. In both cases, the tool passes the test and fails a person.
Accuracy matters, but it’s not the whole story. If we chase only the right answer, we ship systems that look strong in demos and lose people in real use.
Beyond the Hype: What Actually Makes AI Design Different

Just a few years ago, conversation flow design was at the heart of chatbot research (Cho et al., 2025). Designers developed detailed guidelines to structure dialogues, crafted messaging frameworks for seamless interactions, and carefully designed output messages to align with chatbot personas. Then, as transformer-based large language models (LLMs) arrived, the rigid, predefined conversation structures that worked for rule-based systems could no longer accommodate LLMs’ dynamic, context-aware responses. Research priorities shifted from designing fixed dialogue trees to exploring prompt engineering and interaction patterns (Cho et al., 2025). The expertise built over years around conversation flow design remained fundamental, but it needed rapid reframing: designers had to rethink how to guide conversations without prescribing every turn, how to maintain coherence without rigid structures, and how to evaluate interactions that vary with each user (Subramonyam et al., 2024). This narrative of change is not just about chatbots. It points to generative AI’s (GenAI) unique temporal challenge and raises a critical question for anyone designing with or for AI: are our design methods keeping up, or do we need new ones?
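To make the shift concrete, here is a minimal, illustrative sketch (not drawn from the cited studies) contrasting the two paradigms: a rule-based flow that scripts every turn versus an LLM-based flow whose behaviour is shaped by a prompt. The `call_llm` function and the example prompts are placeholders, not a real API.

```python
# Rule-based: a fixed dialogue tree with one scripted reply per recognised intent.
DIALOGUE_TREE = {
    "greeting": "Hello! Do you want to (a) track an order or (b) ask a question?",
    "track_order": "Please enter your order number.",
    "fallback": "Sorry, I didn't understand. Please choose (a) or (b).",
}

def rule_based_reply(intent: str) -> str:
    # Every possible turn was designed in advance; anything else hits the fallback.
    return DIALOGUE_TREE.get(intent, DIALOGUE_TREE["fallback"])

# LLM-based: no scripted turns; the designer shapes behaviour through constraints
# in the prompt rather than a predefined tree.
SYSTEM_PROMPT = (
    "You are a support assistant. Stay on the topics of order tracking and "
    "product questions, keep answers under three sentences, and ask a "
    "clarifying question when the request is ambiguous."
)

def llm_reply(user_message: str, call_llm) -> str:
    # `call_llm` stands in for whichever model API is used. The wording varies
    # with each user and turn; coherence comes from the prompt's constraints.
    return call_llm(system=SYSTEM_PROMPT, user=user_message)
```

The design work does not disappear in the second version; it moves from enumerating turns to specifying constraints and then evaluating behaviour that differs from user to user.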
The Moral Panic Around AI Mental Health

Trigger Warning/Disclaimer: This blog post mentions suicide.
Governments, startup founders, academics, mental health professionals and others wrestle over who gets to define the future of AI mental health care.
Amidst a lack of regulatory oversight of AI-based mental health chatbots, some US states have taken steps to ban these systems in order to protect the public. Full bans are in place in Illinois and Nevada, and although Utah has not imposed an outright ban, it enforces strong restrictions and requirements around transparency, advertising, data use and the involvement of human professionals. As a political strategy and policy instrument, however, bans risk unintended consequences on a population-wide scale (Oliver et al., 2019).
Can You Trust the Machine? alignAI Doctoral Candidates Hold Workshop at Samuel-Heinicke-Fachoberschule

On November 21st, alignAI doctoral candidates Julia Li and Simay Toplu held an interactive workshop with 32 students at Samuel-Heinicke-Fachoberschule, organized together with the Europe Direct Network. The session introduced students to the everyday presence of AI systems and encouraged them to reflect on the risks, benefits and responsible use of AI in real-life situations in the EU and beyond.
Q&A with PI Avigdor Gal

In this video interview, we speak with Professor Avigdor Gal, Benjamin and Florence Free Chaired Professor of Data Science at Technion – Israel Institute of Technology and one of the Principal Investigators in the alignAI project. He discusses his role in alignAI, how his research on data integration, uncertain data and machine learning strengthens our network, and his vision for how the AI ecosystem might evolve in the future.
Safety Guardrails for AI: How LLMs Learn to Stay Safe

Large language models (LLMs) are trained on large amounts of text from the internet, books, forums and other sources in a process called pre-training. This gives them great versatility, but it also comes with a hidden challenge: human language data contains biases, misinformation and unsafe patterns, such as hate speech and toxic or discriminatory content. When models learn from such data, they not only gain useful knowledge but also inherit these problems. On top of this, LLMs tend to be statistically overconfident (Guo et al., 2017; Minderer et al., 2021), meaning they assign higher probabilities to their predictions than their actual accuracy warrants, due to the way they interpret data (Xu et al., 2024). They often present information with certainty even when the output is false. This combination of biased training data and overconfidence can lead to hallucinations, biased answers or unsafe outputs, such as toxic content or instructions for harmful behavior.
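To illustrate what “overconfident” means in practice, here is a minimal sketch (assuming NumPy; the values are toy numbers, not results from the cited papers) of the expected calibration error, a common way to measure the gap between a model’s stated confidence and how often it is actually right.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and observed accuracy,
    computed over confidence bins. Close to 0 means well calibrated;
    overconfident models claim high confidence but are right less often."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            avg_conf = confidences[mask].mean()  # what the model claims
            avg_acc = correct[mask].mean()       # how often it is right
            ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Toy example: the model is ~90% confident on average but only 60% correct.
conf = [0.95, 0.90, 0.88, 0.92, 0.85]
hits = [1, 0, 1, 0, 1]
print(f"ECE: {expected_calibration_error(conf, hits):.3f}")
```

A large gap like this is exactly the failure mode described above: the model sounds certain even when its answers are wrong, which is why calibration and additional safety guardrails matter.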
How is AI Changing the Creative Process? AI as the Co-creator Nowadays

Creativity is often regarded as an “intuition” or “talent” that cannot easily be explained in logical terms (Wu et al. 2021). The creative industries are commonly taken to include graphic design, film, music, video games, fashion, advertising, media and entertainment (Howkins 2002), and are associated with extraordinary thinking by supremely creative individuals (Weisberg 2006). However, creativity in fact runs through all creative activities, from the arts to science, from everyday life to industrial production. Today, creativity is considered a crucial competency (Binkley et al. 2012). Boden (2004), a pioneer in the philosophy of cognitive science, defines creativity as “the ability to come up with ideas or artefacts that are new, surprising and valuable”. With the help of language, humans have applied the creative process to art and technology for at least 40,000 years, making creativity “one of the most striking features of the human species” (Carruthers 2002, p. 226). Creativity in today’s sense is at the heart of human endeavour, shaping fields as varied as education, art and healthcare (Esling and Devis 2020; Farina et al. 2024; Tredinnick and Laybats 2023).
Q&A with DC Tuan-Ting Huang

What inspired you to join the alignAI project? Coming from a graphic design background, my master’s studies in interaction design opened my eyes to the possibilities of human-machine collaboration and to what computers can bring to the creative field in terms of serendipity, iteration and data processing. But alongside the excitement about these technological potentials […]