What inspired you to join the alignAI project?
Coming from a graphic design background, my master’s studies in interaction design opened my eyes to the possibilities of human-machine collaboration and to what computers can bring to the creative field in terms of serendipity, iteration and data processing. But alongside the excitement about these technological potentials came a sense of concern. The more I learned about computational tools and AI, the more I noticed the ethical implications that remain undefined or unregulated. Conversations around AI often feel ungrounded, with no clear structure or standards to guide responsible development. I joined alignAI to be part of shaping a more thoughtful, inclusive and accountable vision of future technologies and their role in creative and educational contexts.
What is the focus of your research within alignAI?
My research focuses on evaluating AI tools in educational settings—both existing ones and those we’re developing specifically for teaching and learning. I’m looking closely at how current generative AI applications are being used, or not used effectively, in design education. Are they genuinely inspiring or helpful? Do they support learning or creative exploration? The insights I gather will guide the development of a new educational tool by another doctoral candidate on the project. Later, we’ll test and iterate on that tool based on our research findings.
What excites you most about working at the intersection of AI and education?
Compared to other settings where AI is applied, universities offer a unique space that encourages exploration, imagination and critical thinking. In this setting, the tools we create don’t have to be strictly practical or productivity-driven; they can be speculative, playful and even a bit wild. That might sound odd, but I see it as an opportunity to explore creativity with creativity, alongside machines. With a balance of experimentation and responsibility, I hope we can uncover values and perspectives that go beyond efficiency and automation, and build a framework that upholds them.
How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or beyond?
As AI becomes more integrated into everyday life, its impact needs to be evaluated from multiple angles. No single field can define what responsible AI looks like. Interdisciplinary collaboration brings together diverse forms of expertise and ways of thinking to surface new questions and overlooked concerns. In my own project, I’m especially excited to collaborate with researchers from TUM, who work with tangible tools and have a strong focus on constructivist education. Working with people from different disciplines helps me expand my own imagination, challenge my assumptions and biases, and better understand the nuances of how technology is used and interpreted.
If you had to explain your research to a friend outside academia, how would you describe it?
I study how AI is being used in education and how students, especially design students, can learn both about AI and with AI in more meaningful ways. I focus less on the technical or scientific side of AI and more on its critical and creative dimensions. I’m especially interested in how AI can support a designer’s learning, imagination and critical thinking, rather than just automation or productivity.
Where can people follow your work?
You can follow my digital experiments on Instagram, or find me on LinkedIn and my website.