“Fair Enough?” – Who Wins, Who Loses, and Why AI Needs to Do Better Than Just Working for Most

When people think about AI, they often imagine objectivity: algorithms that soberly follow data and numbers, unaffected by personal opinions, emotions, or prejudice. But here’s the problem: AI systems don’t fall out of the sky. Humans develop them; they’re trained on human-generated data, shaped by human choices, and deployed in human contexts – […]
Opening the Black Box – How AI Explainability Is Being Approached

In our last blog post, we established what explainability is and why it matters. Now comes the harder part: figuring out how to make it work. As with anything related to AI, the answer is not simple, and there is no one-size-fits-all solution. Instead, there is a broad range of methods, processes, and design strategies that […]
Can You Trust What You Don’t Understand? Why AI Needs to Explain Itself!

Most of the time, we don’t question the systems around us until they fail. When planes crash, treatments go wrong, or loans are denied, we ask, “What happened, why, and who’s responsible?” As AI development rapidly progresses and systems take on more and more power in deciding what we see, what we […]
Making LLM Alignment Work – The Need for Collaborative Research

Ensuring that LLMs align with human values is not an easy task. Alignment is particularly challenging because human values are not static, universal, or easily quantifiable and codifiable. What is considered ethical, fair, or appropriate varies significantly across cultures, political ideologies, and social contexts, making it difficult to establish a one-size-fits-all alignment approach (Liu et […]
Why Do LLMs Need Ethical Alignment? – The Risks of Misaligned AI

“As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice’: we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or […]