Our alignAI doctoral candidate Eva Paraschou presented her work “Mind the XAI Gap: A Human-centered LLM Framework for Democratising Explainable AI” at the 3rd World Conference on eXplainable Artificial Intelligence, held in Istanbul, Turkey (July 9-11, 2025).
Her paper introduces a new framework that uses large language models (LLMs) to create explanations for AI decisions that are both technically accurate and easy for end users to understand. A pre-print is already available on arXiv (Paraschou et al., 2025), and the final version is forthcoming as an open-access Springer publication.
The “Mind the XAI Gap” Paper
AI often operates like a “black box”: we see inputs and outputs but not how decisions are made. While Explainable AI (XAI) seeks to address this, the most common approaches produce explanations that are too technical for general users.
In their paper, Eva Paraschou and her colleagues propose a framework designed to provide clear, coherent explanations that are accessible to a wider audience. The framework relies on LLMs to translate complex AI decisions into understandable language, generating responses that contain:
- Simple, easy-to-read explanations for people who aren’t AI experts.
- Detailed technical information for the experts who need it.
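To make this two-layer idea concrete, here is a minimal, hypothetical sketch: it takes a technical XAI output (feature attributions in the style of SHAP) and builds a prompt asking an LLM for both a lay and an expert explanation. The function name, prompt wording and example values are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of prompting an LLM for a layered, two-audience
# explanation of a model decision. This is NOT the paper's code; all
# names and values below are illustrative assumptions.

def build_explanation_prompt(prediction: str, attributions: dict[str, float]) -> str:
    """Compose a prompt requesting a lay and a technical explanation."""
    # Rank features by absolute attribution so the most influential come first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    feature_lines = "\n".join(f"- {name}: {score:+.3f}" for name, score in ranked)
    return (
        f"A model predicted: {prediction}\n"
        f"Feature attributions (positive values push toward the prediction):\n"
        f"{feature_lines}\n\n"
        "Explain this decision twice:\n"
        "1. In plain language for a non-expert (no jargon).\n"
        "2. In technical terms for an ML practitioner.\n"
    )

if __name__ == "__main__":
    prompt = build_explanation_prompt(
        prediction="high well-being risk",
        attributions={"sleep_hours": -0.42, "daily_steps": -0.18, "screen_time": 0.31},
    )
    print(prompt)  # Would then be sent to any instruction-tuned LLM.
```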
To show that the framework is effective, they first created a benchmark of correct, ground-truth explanations by analysing over 40 scenarios involving different AI models and datasets. They then focused on the explanations for a scenario about human well-being, demonstrating that the framework is capable of producing:
- High-quality explanations: the framework generated explanations that closely matched the ground-truth explanations in the benchmark, with a strong correlation of 92% (a simplified similarity sketch follows this list).
- User-friendly explanations: in a study with 56 participants, non-experts rated the explanations as substantially clearer and more helpful, confirming that the framework genuinely centres the average layperson.
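As a rough illustration of how generated explanations can be scored against reference explanations, the sketch below compares two texts via cosine similarity over TF-IDF vectors. This is a generic stand-in for illustration only; the paper's 92% figure comes from its own evaluation protocol, not from this particular metric.

```python
# Generic sketch: scoring a generated explanation against a ground-truth
# reference with TF-IDF cosine similarity. Illustrative only; the paper
# uses its own evaluation protocol.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def explanation_similarity(generated: str, reference: str) -> float:
    """Cosine similarity between two explanations in TF-IDF space (0 to 1)."""
    vectors = TfidfVectorizer().fit_transform([generated, reference])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

score = explanation_similarity(
    "Lack of sleep was the main factor behind the low well-being score.",
    "The model's prediction was driven primarily by low sleep duration.",
)
print(f"similarity: {score:.2f}")
```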
By bridging the gap between technical complexity and human understanding, the proposed framework ensures that a broader audience, not just experts, can understand why an AI system made a certain decision.
The XAI Conference
Eva Paraschou presented as part of the Human-Centred Explainable AI Special Track, where she had the opportunity to discuss her research with other experts in the field. The conference featured special tracks and presentations on technical methods for creating understandable AI, including concept-based, feature-importance-based and intrinsically interpretable approaches. Sessions also homed in on practical applications of XAI in fields like healthcare and on its role in scientific discovery, and addressed the wider implications of XAI, such as ethical considerations, privacy concerns and the challenges of integrating XAI into industry.
Some standout contributions she highlighted included:
- Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities (Domnich et al., 2025)
- Human-Centered Explainable AI: Creating Explanations that Address Stakeholder Needs (Hummel, 2025)
- Reasoning-Grounded Natural Language Explanations for Language Models (Cahlik et al., 2025)
The “Methods for Statistical Evaluation of AI” Summer School
Eva Paraschou also had the chance to present her work as a poster at the “Methods for Statistical Evaluation of AI” summer school in Nyborg, Denmark (August 25-29, 2025). Along with networking with peers and senior researchers, she attended lectures on the fairness, evaluation and safety of AI systems, with specific discussions on PAC-Bayesian analysis, split conformal prediction, probabilistic machine learning and active learning.
References:
Cahlik, V., Alves, R., & Kordik, P. (2025). Reasoning-Grounded Natural Language Explanations for Language Models. arXiv preprint arXiv:2503.11248.
Domnich, M., Veski, R. M., Välja, J., Tulver, K., & Vicente, R. (2025). Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities. arXiv preprint arXiv:2504.13899.
Hummel, A. (2025). Human-Centered Explainable AI: Creating Explanations that Address Stakeholder Needs.
Paraschou, E., Arapakis, I., Yfantidou, S., Macaluso, S., & Vakali, A. (2025). Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI. arXiv preprint arXiv:2506.12240.