Beyond Accuracy: Why “Being Right” Isn’t Enough for Human-Centred AI

Imagine two scenarios. A teacher asks an AI to review a student’s essay. Its feedback is accurate, the grammar is fixed and the facts are straight, yet the student still feels stuck, with no clue what to try next. A software team asks an AI to flag bugs. The model points to real issues, but the way it explains them leaves new engineers more confused than confident. In both cases, the tool passes the test and fails the person.
Accuracy matters, but it’s not the whole story. If we chase only the right answer, we ship systems that look strong in demos and lose people in real use.
Beyond the Hype: What Actually Makes AI Design Different

Just a few years ago, conversation flow design was at the heart of chatbot research (Cho et al., 2025). Designers developed detailed guidelines to structure dialogues, crafted messaging frameworks for seamless interactions, and carefully designed output messages to align with chatbot personas. Then, as transformer-based large language models (LLMs) arrived, the rigid, predefined conversation structures that worked for rule-based systems could no longer accommodate LLMs’ dynamic, context-aware responses. Research priorities shifted from designing fixed dialogue trees to exploring prompt engineering and interaction patterns (Cho et al., 2025). The expertise built over years around conversation flow design remained fundamental, but it needed rapid reframing: designers had to rethink how to guide conversations without prescribing every turn, how to maintain coherence without rigid structures, and how to evaluate interactions that varied with each user (Subramonyam et al., 2024). This narrative of change isn’t limited to chatbots. It reflects Generative AI’s (GenAI) unique temporal challenge, and it raises a critical question for anyone designing with or for AI: are our design methods keeping up, or do we need new ones?
The Moral Panic Around AI Mental Health

Trigger Warning/Disclaimer: This blog post mentions suicide.
Governments, startup founders, academics, mental health professionals and others wrestle over who gets to define the future of AI mental health care.
Amidst a lack of regulatory oversight of AI-based mental health chatbots, some US states have taken steps to ban these systems in order to protect the public. Full bans are in place in Illinois and Nevada, and although Utah has not banned them outright, it imposes strong restrictions and requirements around transparency, advertising, data use and the involvement of human professionals. Yet bans, as a political and policy strategy, risk unintended consequences on a population-wide scale (Oliver et al., 2019).
Safety Guardrails for AI: How LLMs Learn to Stay Safe

Large language models (LLMs) are trained on vast amounts of text from the internet, books, forums and other sources in a process called pre-training. This gives them great versatility, but it also comes with a hidden challenge: human language data contains biases, misinformation and unsafe patterns, such as hate speech and toxic or discriminatory content. When models learn from such data, they not only gain useful knowledge but also inherit these problems. On top of this, LLMs tend to be statistically overconfident (Guo et al., 2017; Minderer et al., 2021), meaning they assign higher probabilities to their predictions than their actual accuracy warrants, due to the way they interpret data (Xu et al., 2024). They often present information with certainty even when the output is false. This combination of biased training data and overconfidence can lead to hallucinations, biased answers or unsafe outputs, such as toxic content or instructions for harmful behavior.
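One standard remedy for this overconfidence, proposed by Guo et al. (2017), is temperature scaling: dividing the model’s raw scores (logits) by a temperature T > 1 before the softmax, which softens the predicted probabilities without changing which answer ranks first. A minimal sketch in Python (the logit values here are made up purely for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature flattens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.5]                     # hypothetical next-token scores

confident = softmax(logits)                  # T = 1: sharp, overconfident-looking
calibrated = softmax(logits, temperature=2.0)  # T = 2: softer, less extreme

print(max(confident) > max(calibrated))      # the top probability shrinks
```

The ranking of answers is unchanged; only the expressed certainty drops, which is why temperature scaling is a popular post-hoc calibration step.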
How is AI Changing the Creative Process? AI as the Co-creator Nowadays

Creativity is often regarded as an “intuition” or “talent” that can’t easily be interpreted in a logical way (Wu et al. 2021). The creative industries usually refer to graphic design, film, music, video games, fashion, advertising, media or entertainment (Howkins 2002), associated with the extraordinary thinking of supremely creative individuals (Weisberg 2006). However, creativity actually lies in all creative activities, from the arts to science, from everyday life to industrial production. Today, creativity is considered a crucial competency (Binkley et al. 2012). Boden (2004), who pioneered the philosophy of cognitive science, offers the definition: “Creativity is the ability to come up with ideas or artefacts that are new, surprising and valuable”. With the help of language, humans have applied the creative process in art and technology for at least 40,000 years, making creativity “one of the most striking features of the human species” (Carruthers 2002, p. 226). Creativity in today’s sense is at the heart of human endeavour, shaping fields as varied as education, art and healthcare (Esling and Devis 2020; Farina et al. 2024; Tredinnick and Laybats 2023).
Meta and Mind: Tracing the Journey of Thinking about Thinking

For as long as we have written history, humans have been fascinated by the idea of thinking about thinking. The ancient Greeks saw self-reflection as a path to wisdom: Socrates urged his students to “know thyself”, while Aristotle suggested that the mind could even grasp its own activity. Centuries later, philosophers and logicians took this further, asking whether knowing something also means knowing that you know it. In the 1960s, Jaakko Hintikka captured this in a famous principle of logic: if an agent knows a fact, it should also know that it knows it. Fast forward to today, and this same idea has found new life in artificial intelligence, where researchers explore how machines might be designed not just to think, but to reflect on their own thinking.
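Hintikka’s principle is known in epistemic logic as positive introspection (axiom 4). Writing K φ for “the agent knows φ”, it can be stated as:

```latex
K\varphi \rightarrow KK\varphi
```

In words: whenever the agent knows a fact φ, it also knows that it knows φ. Whether artificial agents can or should satisfy this axiom is exactly the kind of question today’s work on machine metacognition revisits.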
Creativity, Style and the Flattening Threat in Large Language Models

The debate on creativity has intensified with the rise of generative AI, especially large language models (LLMs). Recent research shows that these systems can produce work that competes with, and in some cases exceeds, human creativity (Guzik et al., 2023; Bohren et al., 2024). At the same time, their use brings serious concerns about value, authenticity, and the long-term safeguarding of human creative practices (Mei et al., 2025; Messer, 2024). This tension highlights what might be called the “flattening threat”: the perceived risk that even as LLMs make it easier to generate ideas and boost productivity, they could also diminish the diversity, style and authenticity that enrich human creativity.
The Myth of Neutral Participation: Why Good Intentions Aren’t Enough in AI Design

The field of AI is experiencing a participatory turn (Delgado et al., 2023). From tech companies to researchers, there is growing recognition that AI design and development should not happen in isolation from the people it affects. Whether AI systems are designed for mental health, education or journalism, they need input from communities who deeply understand these domains. Interdisciplinary collaboration has taken on an increasingly pivotal role, bringing together computer scientists, researchers, ethicists and community members to create more aligned and responsible AI systems. This shift certainly represents progress.
AI-ready Newsrooms: Why the Online News Industry is at the Forefront of the LLMs Revolution

When thinking about generative AI and its disruptive impact, text generation often comes up as the most representative example of this new chapter in technological advancement. Large language models (LLMs) are rapidly transforming sectors whose core work involves text-generation tasks such as writing, drafting or summarising, and the online news industry has been challenged to adapt to these new tools since GPT (generative pre-trained transformer) models became known to the mass public in late 2022.
RAG: Teaching Large Language Models to Use a Library

Imagine you would like to write an essay about quantum computing, but your knowledge of the subject comes only from your high-school textbooks. You know how to write a good essay, but your knowledge is limited. Now imagine you could access any library in the world while writing. That would make your work far easier, and it’s essentially what retrieval-augmented generation (RAG) does for large language models (LLMs).
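A RAG pipeline has two steps: retrieve the documents most relevant to the query, then hand them to the model as context inside the prompt. A toy sketch in Python, where the corpus, query and prompt template are illustrative assumptions (real systems use dense embeddings and a vector index rather than the bag-of-words similarity shown here):

```python
# Toy retrieval-augmented generation: retrieve the most relevant document
# with bag-of-words cosine similarity, then build a grounded prompt.
import math
import re
from collections import Counter

CORPUS = [
    "Quantum computing uses qubits, which can be in superposition of states.",
    "Classical computers store information in bits that are either 0 or 1.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

def vectorise(text):
    """Lowercase, strip punctuation, and count word occurrences."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    qv = vectorise(query)
    ranked = sorted(corpus, key=lambda d: cosine(qv, vectorise(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Assemble the prompt an LLM would receive: retrieved context + question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."

print(build_prompt("What are qubits in quantum computing?", CORPUS))
```

Here `retrieve` plays the role of the library: the model no longer has to memorise every fact, it only has to read the retrieved context and write the answer.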