What inspired you to join the alignAI project?
During my master’s studies in philosophy, I focused on the reasoning capacities of large language models (LLMs) — especially how we can assess and improve their common-sense reasoning. This experience deepened my interest in the explainability of AI systems and raised philosophical questions about how reasoning actually works in these models. I was drawn to the alignAI project because it brings together both technical and philosophical perspectives and gives me the opportunity to contribute to a framework that helps us better understand and guide the reasoning processes of AI systems and make them more transparent.
What is the focus of your research within alignAI?
My research focuses on improving the reasoning capacities of LLMs. I’m especially interested in methods like Chain-of-Thought prompting and other prompting techniques that aim to reveal and shape how models reason step by step. A central part of my work is exploring whether these systems can monitor and revise their own reasoning — a process known as meta-reasoning. Alongside these technical tools, I also work on developing a philosophical framework to better understand and evaluate what it means for an AI system to reason, not just as a process of computation, but as something that begins to resemble thinking.
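To give a rough sense of the two ideas, here is a minimal sketch of how zero-shot Chain-of-Thought prompting and a simple meta-reasoning (self-revision) pass might look in code. The `ask` callable and the `echo_model` stub are hypothetical placeholders for whatever LLM interface one actually uses, not any particular library’s API; the “Let’s think step by step” cue is the zero-shot CoT trigger from Kojima et al. (2022).

```python
# Sketch of zero-shot Chain-of-Thought prompting plus a meta-reasoning
# (self-revision) pass. `ask` is a hypothetical stand-in for any LLM
# completion call; it is not a real library API.
from typing import Callable


def chain_of_thought(ask: Callable[[str], str], question: str) -> str:
    # Zero-shot CoT: appending a step-by-step cue elicits intermediate
    # reasoning steps instead of a bare answer (Kojima et al., 2022).
    prompt = f"{question}\nLet's think step by step."
    return ask(prompt)


def meta_revise(ask: Callable[[str], str], question: str, draft: str) -> str:
    # Meta-reasoning pass: the model is asked to inspect its own draft
    # reasoning and revise it if it finds a flaw.
    critique_prompt = (
        f"Question: {question}\n"
        f"Proposed reasoning:\n{draft}\n\n"
        "Check each step of this reasoning. If any step is wrong, "
        "explain the error and give a corrected answer; otherwise "
        "restate the answer."
    )
    return ask(critique_prompt)


if __name__ == "__main__":
    # Dummy model so the sketch runs end to end; substitute a real
    # LLM call for `echo_model` in practice.
    def echo_model(prompt: str) -> str:
        return f"[model output for: {prompt[:40]}...]"

    q = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    )
    draft = chain_of_thought(echo_model, q)
    final = meta_revise(echo_model, q, draft)
    print(final)
```

The point of the sketch is structural: the first call shapes how the model reasons, while the second call treats that reasoning itself as an object to be monitored and corrected, which is the sense of meta-reasoning discussed above.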
What excites you most about working at the intersection of AI and philosophy?
I’ve always been fascinated by how the mind works: how thoughts emerge, how understanding happens and what it really means to reason. Since the beginning of my academic journey, philosophy has been my way of exploring these questions. Over time, I started to see AI systems as a kind of philosophical laboratory. They give us a chance to test our ideas about thinking and reasoning in ways we couldn’t before. What excites me most is that this work brings us closer to the big, mysterious questions about the mind that once seemed unreachable. Now, we have tools to explore them more concretely and critically.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or beyond?
I believe the future of AI depends on strong interdisciplinary collaboration. We can’t fully understand or guide intelligent systems by looking at them from a single perspective. Philosophy helps us ask the right questions, cognitive science gives us insight into how minds work, and computer science builds the systems that make these ideas real. In my own work, I constantly move between these fields. That back-and-forth is not just enriching; it’s necessary if we want AI systems to be not only powerful, but also understandable, responsible and aligned with human values.
If you had to explain your research to a friend outside academia, how would you describe it?
You know how sometimes AI gives surprisingly smart answers and other times completely misses the point? I study why that happens. Basically, I’m trying to figure out how these systems “reason”, and whether we can help them do it better. I use ideas from philosophy to explore which kinds of prompts and concepts actually help the model think more clearly. It feels a bit like testing different recipes to see which ones spark better reasoning. In short, my job is hanging out with LLMs and trying to make them a little wiser.
Where can people follow your work?
You can find me on LinkedIn and ResearchGate, and feel free to reach out via email at z.kabadere@tue.nl.
Image Attribution
Enhanced by: Krea AI
Date: 15/07/2025