Trust Me, I’m an Algorithm – On Trust, Trustworthiness and the Trouble with Both

Imagine this: You’re in a courtroom and have been convicted of a crime. Before announcing the sentence, the judge consults a risk assessment tool – an AI system designed to promote consistency and reduce human bias in sentencing. The algorithm generates a score meant to help the judge weigh rehabilitation potential versus public risk. It’s […]

More Than Just Math – How Fairness is Being Approached in AI

In our last blog post, we unpacked what fairness in AI means, why it matters, and why technical fixes alone aren’t enough to solve the deeper social and structural challenges at play. Still, just because fairness isn’t the whole answer doesn’t mean it isn’t a necessary part of the solution. Improving how AI […]

Opening the Black Box – How AI Explainability Is Being Approached

In our last blog post, we established what explainability is and why it matters. Now comes the harder part: Figuring out how to make it work. Like anything related to AI, the answer is not simple and there is no one-size-fits-all solution. Instead, there is a broad range of methods, processes, and design strategies that […]

Can You Trust What You Don’t Understand? Why AI Needs to Explain Itself!

Most of the time, we don’t question the systems around us until they fail. When planes crash, treatments go wrong, or loans are denied, we ask, “What happened, why, and who’s responsible?” As AI development rapidly progresses and systems take on more and more power in deciding what we see, what we […]

Making LLM Alignment Work – The Need for Collaborative Research

Ensuring that LLMs align with human values is not an easy task. Alignment is particularly challenging because human values are not static, universal, or easily quantifiable and codifiable. What is considered ethical, fair, or appropriate varies significantly across cultures, political ideologies, and social contexts, making it difficult to establish a one-size-fits-all alignment approach (Liu et […]

Why Do LLMs Need Ethical Alignment? – The Risks of Misaligned AI
“As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice’: we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or […]