Exploring AI, Moral Advice and Human Decision-Making

As part of the TUM Institute for Ethics in Artificial Intelligence (IEAI) Speaker Series, the alignAI Doctoral Network joined a timely session on June 16, 2025, featuring Prof. Eyal Aharoni, Associate Professor of Neuroscience, Philosophy and Psychology at Georgia State University. His talk, titled “Pandora’s Bots? Studies of AI Moral Advice and Their Implications for Human Decision-Making”, was moderated by Dr. Franziska Poszler, Research Associate at the IEAI.

The talk explored the rising influence of AI systems, particularly large language models (LLMs), on morally significant decisions. While these systems are increasingly used to offer guidance in complex or sensitive contexts, their influence often goes underexamined. Drawing on empirical research, the session asked: Do LLMs demonstrate moral intelligence? And perhaps more importantly, do people perceive them as morally intelligent?

Prof. Aharoni presented results from a moral Turing Test designed to explore the second question. Participants were shown scenarios involving moral or conventional transgressions, each followed by two responses – one written by a human, the other by an LLM. Participants were asked to judge which response was more accurate and later to guess which came from the AI. In many cases, participants failed to identify the AI correctly. Evidence presented during the session also revealed that users frequently rated AI-generated moral evaluations as more accurate than human ones. This tendency to over-trust AI raises concerns about how readily people may accept or act on moral advice without sufficient reflection when systems appear calm, coherent or confident.

The discussion also introduced the AI performance paradox: as AI systems improve, human performance in decision-making may decline because of growing reliance. Even when AI performs well on average, uncritical reliance can weaken human agency and critical thinking.

The speaker emphasised the need to move beyond regulation and focus on human-centred design and approaches that encourage reflective, informed engagement instead of passive dependence. Design was presented as a practical tool for fostering ethical alignment that extends the conversation beyond compliance.

alignAI Doctoral Candidates Gökçe Şahin, Mohaned Bahr, Julia Li and Simay Toplu attended the session and reflected on the implications for their own research into responsible AI development in contexts where trust, values and user understanding play central roles.

We thank the organisers and Prof. Aharoni for sharing these timely insights and for creating space for critical reflection on the ethical complexities of AI in high-stakes domains. The recording of the event can be found here: https://youtu.be/Iu9rIKo_4hk
