The Myth of Neutral Participation: Why Good Intentions Aren’t Enough in AI Design

The field of AI is experiencing a participatory turn (Delgado et al., 2023). From tech companies to researchers, there is growing recognition that AI design and development should not happen in isolation from the people it affects. Whether AI systems are designed for mental health, education or journalism, they need input from communities who deeply understand these domains. Interdisciplinary collaboration has taken on an increasingly pivotal role, bringing together computer scientists, researchers, ethicists and community members to create more aligned and responsible AI systems. This shift certainly represents progress.

Yet recent research reveals a concerning pattern in spite of these well-intentioned efforts. Much of what we call “participation” in AI design may be reproducing the very power imbalances it claims to address (Birhane et al., 2022). The problem is not that we are including the wrong people but that we are not questioning the fundamental structures that determine who gets to participate, when and on whose terms. As Sherry Arnstein observed decades ago: “participation without redistribution of power is an empty and frustrating process for the powerless. It allows the powerholders to claim that all sides were considered, but makes it possible for only some of those sides to benefit. It maintains the status quo” (Arnstein, 1969).

What Meaningful Participation is Not

To understand meaningful participation, we must first examine its common distortions. Much of what passes for participation in AI development falls into one of several pitfalls. Participation as an afterthought occurs when community input is solicited only after core decisions about problem framing, design, technical approach and implementation have already been made (Sloane et al., 2022). Communities are asked what they think about predetermined options rather than being involved in defining the problem itself, often reducing their participation to a superficial level.

Participant washing reframes various forms of labour as community engagement (Sloane et al., 2022; Birhane et al., 2022). Data labelling, content moderation and other forms of digital work get repositioned as participatory when in reality they are extractive practices that depend on undercompensated labour, often from Global South communities. Convenience sampling presents itself as inclusive while systematically centring the same voices: participatory AI projects often end up dominated by educated, tech-savvy participants who are easier to reach and engage, while claiming to represent broader community interests. Participation for appearance involves going through the motions of community engagement, primarily to legitimise predetermined technological interventions rather than to genuinely redistribute decision-making power (Birhane et al., 2022).

Barriers to Meaningful Participation in AI

AI development presents unique challenges that make meaningful participation particularly difficult to achieve, even when intentions are sincere.

Foundation model constraints reveal a fundamental tension between participation and scale. It becomes practically impossible for impacted communities to meaningfully shape foundation models that are designed to be universally applicable (Suresh et al., 2024). Because foundation models are built for broad applicability rather than context-specific needs, incorporating the situated, local knowledge that meaningful participation generates can be difficult. Another piece of this puzzle is that foundation model developers currently lack incentives to share control with communities.

Furthermore, technical approaches like fine-tuning, which seem to offer pathways for community input, present their own limitations; fine-tuned models can inherit the foundational biases of their base systems (Suresh et al., 2024). While researchers are beginning to develop frameworks for more participatory approaches to foundation model development and fine-tuning, there is a lot of work to be done to learn how communities would meaningfully participate in ensuring these models are truly fit for their specific purposes.

In this landscape we also encounter techno-optimistic framing, especially in the context of AI. When researchers start from the assumption that a particular domain is “ripe for AI intervention”, they have already foreclosed the possibility that communities might conclude the problem needs policy change, resource redistribution, or that certain technological interventions shouldn’t exist at all. This dynamic was evident in research on Danish municipal job placement systems, where data scientists began with assumptions about building individual profiling algorithms while caseworkers identified organisational contradictions as the real problem requiring attention (Møller et al., 2020). The mismatch reveals how techno-optimistic framing can persist even within well-intentioned participatory processes, shaping what solutions are considered viable from the outset.

The right to refusal of AI intervention feels out of scope when funding structures, institutional incentives and project timelines are built around delivering technical solutions (Vethman et al., 2025). Even well-designed participatory processes struggle to conclude “don’t build this” without undermining the economic and professional interests that make participation possible.

These structural constraints are reinforced by regulatory gaps: despite rhetorical commitments, the EU AI Act provides limited legal requirements for participation, with key provisions such as risk management and rights impact assessments containing no explicit participation mandates (Ullstein et al., 2025).

Toward Genuine Participation

Meaningful participation requires acknowledging that communities affected by AI systems possess situated knowledge that makes them experts through experience. Their lived understanding of problems, contexts and potential solutions constitutes a form of expertise that technical credentials cannot replace or override. This shift demands recognising epistemic pluralism: the idea that different ways of knowing are valid and necessary (Klein & D’Ignazio, 2024). Context-sensitive solutions embrace this pluralism by building systems flexible enough to work differently in different contexts, rather than imposing one-size-fits-all approaches that inevitably fail diverse communities.

Participant representation, involvement and equitable remuneration are key factors in participatory design (Delgado et al., 2023). Appropriate representation means actively centring marginalised voices rather than defaulting to convenient or vocal participants. This requires examining who benefits from current systems and prioritising those who are most vulnerable to being negatively impacted by the technology. Multi-stage involvement allows communities to participate from research question development through implementation and evaluation. When participation begins at problem definition rather than solution refinement, the right to refusal becomes structurally possible (Birhane et al., 2022; Vethman et al., 2025).

For example, a feminist data science project on feminicide monitoring demonstrates what multi-stage involvement with appropriate representation can look like in practice (Suresh et al., 2022). Rather than starting with technical assumptions, researchers began by co-designing datasets and models directly with activists who collect data about gender-based killings. The project focused on intersectional identities rather than statistical majorities and involved participants from problem conceptualisation through data collection to model evaluation, including iterative processes that interrogated fundamental framing concepts, such as who gets included or excluded in definitions of ‘feminicide’. This approach allowed community expertise to shape not just implementation details but foundational questions about what the technology should do and for whom.

In addition, researcher reflexivity is essential: researchers should engage in ongoing examination of their positionality, assumptions and decision-making processes throughout projects, and actively seek feedback from participants about the process itself (Birhane et al., 2022).

Beyond Good Intentions

The participatory turn in AI development represents important progress, but it is just the beginning of a longer transformation. Moving from superficial involvement to genuine power-sharing requires confronting uncomfortable questions about who controls AI development, whose knowledge is counted as legitimate and how current institutional structures can accommodate truly democratic technological governance. This is a call to push interdisciplinary collaboration towards approaches that don’t just include diverse voices, but actually redistribute the power to shape AI’s future. Until participation means communities can fundamentally alter or halt technological development based on their situated expertise, even our most well-intentioned efforts risk becoming elaborate forms of legitimation for unchanged power structures.

References:

Arnstein, S. R. (1969). A ladder of citizen participation. Journal of the American Institute of Planners, 35(4), 216–224. https://doi.org/10.1080/01944366908977225.

Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022). Power to the People? Opportunities and Challenges for Participatory AI. ACM, 1–8. https://doi.org/10.1145/3551624.3555290.

Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. ACM, 1–23. https://doi.org/10.1145/3617694.3623261.

Klein, L., & D’Ignazio, C. (2024). Data Feminism for AI. 2024 ACM Conference on Fairness, Accountability, and Transparency, 100–112. https://doi.org/10.1145/3630106.3658543.

Møller, N. H., Shklovski, I., & Hildebrandt, T. T. (2020). Shifting Concepts of Value. ACM, 1–12. https://doi.org/10.1145/3419249.3420149.

Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2022). Participation Is not a Design Fix for Machine Learning. ACM, 1–6. https://doi.org/10.1145/3551624.3555285.

Suresh, H., Movva, R., Dogan, A. L., Bhargava, R., Cruxen, I., Cuba, A. M., Taurino, G., So, W., & D’Ignazio, C. (2022). Towards Intersectional Feminist and Participatory ML: A Case Study in Supporting Feminicide Counterdata Collection. 2022 ACM Conference on Fairness, Accountability, and Transparency, 667–678. https://doi.org/10.1145/3531146.3533132.

Suresh, H., Tseng, E., Young, M., Gray, M., Pierson, E., & Levy, K. (2024). Participation in the Age of Foundation Models. 2024 ACM Conference on Fairness, Accountability, and Transparency, 1609–1621. https://doi.org/10.1145/3630106.3658992.

Ullstein, C., Jarvers, S., Hohendanner, M., Papakyriakopoulos, O., & Grossklags, J. (2025). Participatory AI and the EU AI Act. ACM (in press). https://www.cs.cit.tum.de/fileadmin/w00cfj/ct/papers/2025-AIES-Ullstein.pdf.

Vethman, S., Smit, Q. T. S., Van Liebergen, N. M., & Veenman, C. J. (2025). Fairness Beyond the Algorithmic Frame: Actionable Recommendations for an Intersectional Approach. ACM, 3276–3290. https://doi.org/10.1145/3715275.3732210.

Further reading/watching/listening:

Books & Articles:

Design Justice: Community-Led Practices to Build the Worlds We Need by Sasha Costanza-Chock (MIT Press).

Data Feminism by Catherine D’Ignazio and Lauren F. Klein (MIT Press).

Image Attribution

People and Ivory Tower AI by Jamillah Knowles & We and AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
