Author – Sharvari Bondre
Just a few years ago, conversation flow design was at the heart of chatbot research (Cho et al., 2025). Designers developed detailed guidelines to structure dialogues, crafted messaging frameworks for seamless interactions and carefully designed output messages to align with chatbot personas. Then transformer-based large language models (LLMs) arrived, and the rigid, predefined conversation structures that worked for rule-based systems couldn’t accommodate LLMs’ dynamic, context-aware responses. Research priorities shifted from designing fixed dialogue trees to exploring prompt engineering and interaction patterns (Cho et al., 2025). The expertise built over years around conversation flow design remained fundamental, but it needed rapid reframing; designers had to rethink how to guide conversations without prescribing every turn, how to maintain coherence without rigid structures and how to evaluate interactions that varied with each user (Subramonyam et al., 2024). This story of change isn’t unique to chatbots. It illustrates generative AI’s (GenAI) distinctive temporal challenge, and it raises a critical question for anyone designing with or for AI: are our design methods keeping up, or do we need new ones?
What Are KEMs and Why Do They Matter for AI?
Key Enabling Methodologies (KEMs) are frameworks and tools used to tackle complex, socio-technical challenges. They are methods that enable mission-driven innovation and bridge the gap between technological possibilities and societal needs, helping us design not just functional systems but meaningful interventions. KEMs span dozens of methods, such as Value Sensitive Design and Design Fictions, that provide structured approaches for developing and deploying new technologies responsibly.
For AI design and deployment, three KEM categories are particularly relevant. “Vision and Imagination” methods help us imagine desirable futures and guide technology development towards societal goals rather than just technical capabilities. “Ethics and Responsibility” frameworks address AI-specific ethical challenges of bias, accountability and opacity, which are too often treated as afterthoughts rather than design priorities (Gornet et al., 2024). Finally, “Participation and Co-creation” approaches tackle a fundamental tension: AI affects everyone but is designed by very few, with technical complexity creating what researchers call “barriers to AI democratization” (Costa et al., 2024; Birhane et al., 2022). The question is whether these established methodologies, developed for general socio-technical challenges, are adequate for AI’s distinctive characteristics.
Is Human-AI Interaction Totally Unique to Design?
In their 2020 study, Yang et al. re-examined whether human-AI interaction is uniquely difficult to design compared to other technologies. Their findings were nuanced. While AI does present some genuinely distinct characteristics, most perceived challenges are actually “an amplification of issues that have always plagued the design of complex and uncertain technologies” (Yang et al., 2020).
Many challenges attributed to AI such as unclear user needs, interdisciplinary collaboration difficulties and evaluation challenges are actually common to designing any complex, emerging technology. The difference with AI is broadly that of degree rather than kind: these problems are amplified but not fundamentally new. This suggests that existing design methods may provide sound foundations, if consciously adapted.
What are the Nuances in Human-AI Design?
While most challenges are not new to the field of design, some aspects of AI are uniquely difficult to design for. The combination of autonomy and opacity creates a black box problem: AI systems, particularly those using machine learning, operate in ways that are fundamentally inscrutable, even to their designers. Yang et al. (2020) identified output uncertainty as a unique challenge: designers cannot reliably predict what AI systems will produce, making it difficult to envision use cases, prototype effectively and evaluate outcomes for diverse contexts. This unpredictability is compounded by AI’s learning capabilities: systems that continuously adapt challenge static design approaches. For example, technologies like self-adapting LLMs can be especially difficult to envision and design for in contexts such as mental health (Zweiger et al., 2025).
There is also a temporal dimension. AI capabilities evolve rapidly, with functionalities and interfaces changing within single years rather than decades (Cho et al., 2025). Design knowledge of AI can be highly temporal, with established guidelines quickly losing relevance as technologies advance. We are not just designing for current systems but for systems whose future behaviour we cannot fully predict.
The Middle Ground: Conscious Adaptation, Not Replacement
This understanding points towards conscious adaptation rather than wholesale replacement of existing KEMs. Research shows that foundational methods like Value Sensitive Design (VSD) were developed for complex, large-scale systems before AI emerged, and many of their core concepts remain applicable (Friedman et al., 2013; Umbrello & Van De Poel, 2021). The conceptual phase of VSD includes examining what values are in play, for whom, and to what extent these values conflict, all of which is helpful for AI design. However, researchers call for more thoughtful adaptations of these approaches for GenAI. For example, VSD can be extended beyond the conceptual phase to the complete lifecycle of AI technology design and deployment. This extension is especially critical because human values are dynamic and can be affected by the technology deployed, making their consideration important throughout the lifecycle rather than only at the design stage (Maathuis et al., 2019; Umbrello & Van De Poel, 2021).
For Vision and Imagination methods like Design Fictions, the inherent uncertainty of AI outputs makes it particularly challenging to envision scenarios, calling for strategic workarounds that embrace this uncertainty. Consider a practical example: rather than developing a single ten-year vision that becomes outdated, teams might create parallel visions at different timescales. A one-year vision might focus on current AI capabilities (like fine-tuning chatbot responses), a five-year vision on emerging possibilities (like multimodal interactions) and a ten-year vision on desired societal outcomes (like accessible public services). Each vision includes explicit revision points, acknowledging that AI will evolve. The core principle that technology needs direction and shared purpose remains constant while building in flexibility for rapid evolution.
Ethics and Responsibility frameworks need lifecycle extensions rather than replacement: stakeholders cannot assess potential harms when they cannot understand how systems work. In practice, this might mean building explainability features from the start (like showing why a loan was denied, not just that it was), requiring human review for high-stakes decisions (such as medical diagnoses or legal sentencing) and implementing monitoring systems that track how AI decisions affect different groups over time. The foundational ethical principles of autonomy, fairness, transparency and inclusion remain stable. What changes is operationalising them for systems that learn and evolve.
While AI teams face communication challenges, these echo familiar issues from other complex technical projects. Costa et al. (2024) identify technical complexity as a key barrier to participation. However, the foundational democratic principle that those affected by technology should shape it remains constant. Participation and co-creation frameworks can help realise this principle. Rather than one-time consultation, AI design demands continuous engagement. For example, a council developing an AI-powered benefits system might run ongoing community workshops where residents learn about AI capabilities whilst sharing their needs, creating feedback loops where the AI is adjusted based on real experiences (Birhane et al., 2022). AI literacy becomes part of the co-design process itself, not a prerequisite for participation.
A Temporal Framework for Design Knowledge
Throughout this exploration, a pattern emerges: the question isn’t whether our methods work for AI, but rather which aspects of our methods remain stable and which must adapt to AI’s evolution. This suggests a way to categorise design knowledge based on its relationship to change.
We can distinguish three types of knowledge: foundational principles that remain enduring (the democratic imperative that those affected by technology should shape it, the ethical commitments to fairness and transparency), methodological approaches that are adaptable (extending the processes of VSD, the participatory mechanisms of co-design) and implementation details that are inherently temporal (specific prompt engineering techniques, particular evaluation metrics, concrete interface guidelines).
The researchers who developed the chatbot conversation flows weren’t wasting their time. They generated enduring knowledge about how people want to interact with AI, adaptable knowledge about structuring conversational interfaces and temporal knowledge about specific dialogue techniques. Good design methodology is not about finding permanent solutions but about maintaining meaningful engagement with evolving problems.
While AI presents “unique manifestations” of design challenges, many of the issues are not new. AI doesn’t demand entirely new methodologies. It demands temporal wisdom: knowing what to hold onto, what to adapt, and what to let go.
References:
Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. ACM, 1–8. https://doi.org/10.1145/3551624.3555290.
Cho, H., Seo, J. A., Jung, H., Cheon, E., Seo, W., Lupetti, M. L., Pierce, J., & Dove, G. (2025). Design Knowledge in AI: Navigating Temporality and Continuity. ACM, 61–63. https://doi.org/10.1145/3715668.3734175.
Costa, C. J., Aparicio, M., Aparicio, S., & Aparicio, J. T. (2024). The Democratization of Artificial Intelligence: Theoretical Framework. Applied Sciences, 14(18), 8236. https://doi.org/10.3390/app14188236.
Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Philosophy of engineering and technology (pp. 55–95). https://doi.org/10.1007/978-94-007-7844-3_4.
Gornet, M., Delarue, S., Boritchev, M., & Viard, T. (2024). Mapping AI ethics: a meso-scale analysis of its charters and manifestos. 2024 ACM Conference on Fairness, Accountability, and Transparency, 127–140. https://doi.org/10.1145/3630106.3658545.
Maathuis, I., Niezen, M., Buitenweg, D., Bongers, I. L., & Van Nieuwenhuizen, C. (2019). Exploring Human Values in the Design of a Web-Based QoL-Instrument for People with Mental Health Problems: A Value Sensitive Design Approach. Science and Engineering Ethics, 26(2), 871–898. https://doi.org/10.1007/s11948-019-00142-y.
Subramonyam, H., Pea, R., Pondoc, C., Agrawala, M., & Seifert, C. (2024). Bridging the Gulf of Envisioning: Cognitive Challenges in Prompt Based Interactions with LLMs. ACM, 1–19. https://doi.org/10.1145/3613904.3642754.
Umbrello, S., & Van De Poel, I. (2021). Mapping value sensitive design onto AI for social good principles. AI And Ethics, 1(3), 283–296. https://doi.org/10.1007/s43681-021-00038-3.
Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. ACM. https://doi.org/10.1145/3313831.3376301.
Zweiger, A., Pari, J., Guo, H., Akyürek, E., Kim, Y., & Agrawal, P. (2025, June 12). Self-Adapting Language Models. arXiv.org. https://arxiv.org/abs/2506.10943.
Further reading/watching/listening:
Books & Articles:
Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Philosophy of engineering and technology (pp. 55–95). https://doi.org/10.1007/978-94-007-7844-3_4.
Toyama, K. (2015). Geek Heresy: Rescuing Social Change from the Cult of Technology. PublicAffairs.
Image Attribution
Ada Jušić & Eleonora Lima (KCL) / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/