Q&A with DC Sharvari Bondre

What inspired you to join the alignAI project?

My master’s in information science exposed me to the impact of information systems on society and the ethical responsibilities of those who design them. The technology we design not only mirrors but amplifies societal biases, both positive and negative. AlignAI’s focus on harmonising AI with human values is a great opportunity to explore this problem space further and contribute to it meaningfully.

What is the focus of your research within alignAI?

My research will focus on the key enabling methodologies (KEMs) for harmonising AI, specifically large language models (LLMs), with societal values. KEMs bridge the gap between impactful technologies like AI and societal challenges to drive innovation. I will critically study how KEMs can be effective for the alignAI project while developing new implementation approaches and refining existing methodologies. I will work across the project’s three use cases: mental health, education and news consumption.

What excites you most about working at the intersection of AI, design and ethics?

This is a very exciting project overall. I love how I can ground my research in critical theories while using design as a way to create knowledge. I’m also looking forward to working with vulnerable populations and upholding their perspectives in my research and the project at large.

How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or further?

I think interdisciplinary dialogue is of the utmost importance in shaping the future of AI. It is well established that AI tools are not neutral; they embed the values of their creators. Having stakeholders from multiple backgrounds, identities and cultures brings a richer perspective to the process. I’m excited about the collaborative discussions this will foster within the consortium.

If you had to explain your research to a friend outside academia, how would you describe it?

While being careful not to anthropomorphise AI too much, I usually say that just as we humans inherit values from our communities, AI tools also get their values from the knowledge or data they have access to and from the people who built them. This makes it very important to build these tools responsibly and ensure they embody appropriate values. I study the methodologies that help design and build AI tools aligned with societal values.

Where can people follow your work?

You can follow my work on LinkedIn.
