More Than Just a Chatbot? – Why We Keep Treating AI Like a Person, and Why That Might Be a Problem!

Have you ever wished ChatGPT a good morning? Asked it to “please” do something for you? Or thanked it when it did? Now ask yourself why. Maybe you wanted to be polite? Maybe it was a reflex? Or maybe you expected it to provide better results if you were friendly? As AI systems (Large Language […]
Trust Me, I’m an Algorithm – On Trust, Trustworthiness and the Trouble with Both

Imagine this: You’re in a courtroom and have been convicted of a crime. Before announcing the sentence, the judge consults a risk assessment tool – an AI system designed to promote consistency and reduce human bias in sentencing. The algorithm generates a score meant to help the judge weigh rehabilitation potential versus public risk. It’s […]
More Than Just Math – How Fairness is Being Approached in AI

In our last blog post, we unpacked what fairness in AI means, why it matters, and why technical fixes alone aren't enough to solve the deeper social and structural challenges at play. Still, just because fairness isn't the whole answer doesn't mean it isn't a necessary part of the solution. Improving how AI […]
“Fair Enough?” – Who Wins, Who Loses, and Why AI Needs to Do Better Than Just Working for Most

When people think about AI, they often imagine objectivity. They imagine algorithms that soberly follow data and numbers, unaffected by personal opinions, emotions, or prejudice. But here’s the problem: AI systems don’t fall out of the sky. Humans develop them, they’re trained on human-generated data, shaped by human choices, and deployed in human contexts – […]
Opening the Black Box – How AI Explainability Is Being Approached

In our last blog post, we established what explainability is and why it matters. Now comes the harder part: Figuring out how to make it work. Like anything related to AI, the answer is not simple and there is no one-size-fits-all solution. Instead, there is a broad range of methods, processes, and design strategies that […]
Can You Trust What You Don’t Understand? Why AI Needs to Explain Itself!

Most of the time, we don’t question the systems around us until they fail. When planes crash, treatments go wrong, or loans are denied, we ask, “What happened, why, and who’s responsible?” As the development of AI is rapidly progressing and systems take on more and more power in deciding what we see, what we […]
Making LLM Alignment Work – The Need for Collaborative Research

Ensuring that LLMs align with human values is not an easy task. Alignment is particularly challenging because human values are not static, universal, or easily quantifiable and codifiable. What is considered ethical, fair, or appropriate varies significantly across cultures, political ideologies, and social contexts, making it difficult to establish a one-size-fits-all alignment approach (Liu et […]
Why Do LLMs Need Ethical Alignment? – The Risks of Misaligned AI

“As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice’: we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or […]