Q&A with DC Hewei Gao

What inspired you to join the alignAI project?

I’m fascinated by what large language models (LLMs) could mean for education. In many ways, generative AI can help democratise access to knowledge: suddenly, more people can get explanations, feedback and learning support whenever they need it.

But this power also comes with real risks, especially when the users are children and young learners whose critical thinking and values are still developing. If an AI system produces misleading, biased or unsafe content, or if it is used in harmful ways, the impact can scale quickly. Since alignAI is fundamentally about aligning LLMs with human values, it felt like the right place to contribute to making educational AI both beneficial and trustworthy.

What is the focus of your research within alignAI?

My research focuses on two connected goals. First, I work on designing and implementing a framework that meaningfully involves stakeholders such as learners, educators and non-experts in the alignment process, rather than treating alignment as a purely technical afterthought.

Second, I explore how alignment-related concepts such as fairness, transparency, user agency and safety can be operationalised in a more explicit and measurable way. This makes it possible to integrate them into how LLMs are trained and evaluated.

What excites you most about working at the intersection of AI and education?

Education has a lot to gain from generative AI, including personalised explanations, practice support and wider access to learning resources. At the same time, it is one of the most sensitive settings for AI deployment, because learners, especially children, are still developing their reasoning, confidence and values.

What excites me most is the chance to help shape educational AI that supports learning without undermining it. I want to contribute to systems that are not only capable, but also safe, fair and designed to encourage critical thinking.

How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or beyond?

I see interdisciplinary collaboration and AI development as reinforcing each other. As AI becomes more capable and more widely used, it increasingly affects domains beyond computer science, including education, psychology, design, ethics and policy. That makes cross-disciplinary work essential.

At the same time, deeper interdisciplinary research helps AI progress in a more grounded way. It brings domain knowledge, clearer definitions of values, better evaluation methods and more realistic constraints into how systems are built. For alignment in particular, no single discipline has all the answers.

If you had to explain your research to a friend outside academia, how would you describe it?

I would say I work on how we can build AI learning assistants that are helpful and responsible. This includes making their behaviour easier to understand and designing ways for real people, such as students and teachers, to influence what the system should do and what it should avoid. My main focus is education, where getting alignment right really matters.

Where can people follow your work?

You can follow me on LinkedIn or reach out via email.
