Imagine an LLM tailor-made for your cultural context: where you live, the language you speak and the values most important to you. You can depend on it to proofread your emails for tone and social faux pas, respond in everyday, colloquial language and give you relevant recommendations on how to navigate your relationships. You can trust it not to give awkward responses that could put you in a compromising position, and to understand the subtle nuances of human interaction that are vital to navigating everyday life.
Different Contexts, Different Preferences
Now, consider how certain values are perceived by individuals in different contexts. For instance, the importance of privacy relative to other values depends on individual and group preferences. While some populations are comfortable with AI systems accessing personal information from social media posts, others may want personal information online to be strictly regulated. Provided that a user consents and grants access, an LLM can offer user-specific advice drawn from the opinions and ideas shared on private social media platforms. For these users, the privacy of personal data may matter less than the utility gained from an LLM capable of providing customised advice. For others, it may be more important that an LLM does not access their personal information at all, even if that means reduced functionality. It may still be acceptable, however, for the LLM to use other publicly available content, such as blog posts, published articles and other media that can be linked to their identity.
Privacy clearly means something very different to these two groups, despite being a shared value. It follows that AIs and LLMs used by groups with different values must be adjusted to those groups' preferences in order to be both maximally useful and aligned.
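One way these differing preferences could be operationalised is as explicit, user-controlled consent settings that gate which data sources an assistant may draw on. The sketch below is purely illustrative: the fetch_* helpers, source names and ConsentSettings structure are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch: per-user consent settings that gate which data
# sources an assistant may draw on for personalised advice. The
# fetch_* helpers and source names are illustrative stand-ins.

from dataclasses import dataclass


@dataclass
class ConsentSettings:
    """Every source defaults to off; the user must opt in explicitly."""
    allow_private_social_media: bool = False
    allow_public_content: bool = False  # blogs, published articles, etc.


def fetch_private_posts(user_id: str) -> str:
    return f"[private social media posts for {user_id}]"  # stand-in connector


def fetch_public_content(user_id: str) -> str:
    return f"[public blog posts and articles by {user_id}]"  # stand-in connector


def gather_context(user_id: str, consent: ConsentSettings) -> list[str]:
    """Collect only the data the user has willingly shared."""
    context: list[str] = []
    if consent.allow_private_social_media:
        context.append(fetch_private_posts(user_id))
    if consent.allow_public_content:
        context.append(fetch_public_content(user_id))
    return context


# One user opts in to everything; another allows only public content.
print(gather_context("user_a", ConsentSettings(True, True)))
print(gather_context("user_b", ConsentSettings(allow_public_content=True)))
```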
Yet the stakes extend beyond user preferences. AIs and LLMs can also be vehicles of deeper inequities, as well as opportunities to advance fairness and equity.
The Issue of Bias
This brings us to a conundrum: efforts to align AI and LLMs with human values raise other important questions:
Whose values are we aligning AI systems to? Are they different from our own?
What values do we have in common?
Where do we see harmful bias?
In fact, research has shown that widely used LLMs exhibit a cultural bias towards English-speaking and European countries (Tao et al., 2024). Even with the use of “cultural prompting”, or including information about the desired cultural context in the prompt, Tao et al. (2024) found that GPT leaned towards self-expression values, potentially leading people to unknowingly convey support for concepts like bipartisanship and gender equality. People using these responses in places with low tolerance for gender diversity might inadvertently appear culturally unaware.
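In practice, cultural prompting amounts to prepending the desired cultural context to a request before it is sent to a model. The minimal sketch below illustrates the idea; the persona wording is only one possible phrasing, and query_llm() is a hypothetical placeholder for whichever LLM client is actually used.

```python
# Minimal sketch of "cultural prompting" (Tao et al., 2024): prepend
# the desired cultural context to a request before sending it to a
# model. The persona wording is one possible phrasing, and query_llm()
# is a hypothetical placeholder for a real LLM client.

def cultural_prompt(request: str, country: str) -> str:
    """Wrap a request in an explicit cultural-context instruction."""
    return (
        f"You are an average person born and living in {country}. "
        f"Respond as someone from {country} would.\n\n{request}"
    )


def query_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your LLM client.")


prompt = cultural_prompt(
    "Proofread this email for tone and social faux pas: ...",
    country="Japan",
)
print(prompt)
# response = query_llm(prompt)  # hypothetical call
```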
Underrepresented Populations and AI Equity
Furthermore, groups such as linguistic minorities, ethnic minorities, Indigenous populations and populations in the Global South may be missing from the conversation about values and ethics in AI (Roche et al., 2023). This means publicly available LLMs are not as useful to some groups of people as they are to others, potentially excluding those groups from technological developments and benefits. It can also mean information is misused or misappropriated in ways that are offensive and harmful to cultural and linguistic minorities. As AI becomes more ubiquitous in daily life and across sectors, this represents a major inequity. Building AI that is fair and inclusive means seeking out opportunities to avoid reproducing power dynamics that systematically disadvantage certain groups (Shuford, 2024).
On one hand, creating aligned AI can mean building representative datasets that include data from underrepresented populations in AI training. From there, researchers can design and implement mitigation measures to ensure the resulting LLMs are equally helpful for all users.
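As a toy illustration of what "representative" can mean in practice, one might audit the language make-up of a training corpus and flag languages that fall below a chosen representation floor. The corpus, language codes and thresholds below are assumptions for demonstration only, not a method from the cited work.

```python
# Toy illustration: audit the language make-up of a training corpus
# and flag languages below a chosen representation floor. The corpus,
# language codes and thresholds are assumptions for demonstration.

from collections import Counter


def language_shares(corpus: list[tuple[str, str]]) -> dict[str, float]:
    """corpus holds (language_code, document) pairs."""
    counts = Counter(lang for lang, _ in corpus)
    total = sum(counts.values())
    return {lang: n / total for lang, n in counts.items()}


def underrepresented(shares: dict[str, float], floor: float) -> list[str]:
    """Languages whose share of the corpus falls below the floor."""
    return [lang for lang, share in shares.items() if share < floor]


# 19 English documents and 1 Swahili document, for illustration.
corpus = [("en", "doc")] * 19 + [("sw", "doc")]
shares = language_shares(corpus)             # {'en': 0.95, 'sw': 0.05}
print(underrepresented(shares, floor=0.10))  # ['sw']
```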
Data Sovereignty: Resisting an Extractive Approach
Aligned AI can also mean attending to data sovereignty (Rana, 2025). For instance, many Indigenous communities prioritise data sovereignty: the question of who controls data, and about whom. This reflects a resistance to an extractive approach to data (Rana, 2025). In other words, artificial intelligence systems should not simply take from vulnerable populations without giving back. Giving back can take the form of being more useful or responsive, generating more culturally relevant information or providing some other kind of benefit. Fundamentally, data sovereignty also means not taking what is not willingly given. Encouraging AI practitioners to engage meaningfully with Indigenous perspectives helps avoid perpetuating narratives and practices of social marginalisation and colonisation.
AI Tools for Good – What’s Next?
AI technologies can also be powerful tools for social justice and equity, driving the development of tools that mitigate the harms of marginalisation. For instance, LLMs trained on endangered languages can aid language revitalisation by making it possible to use AI in those languages in a way that is accessible to learners (Mgimwa & Dash, 2024; Surendra et al., 2024). Another example is AI-powered tools that improve accessibility on personal devices by responding to the unique needs of individuals with disabilities (Buccella, 2023).
By centring AI development on the principles of fairness, transparency, equity and autonomy, we can move towards AI that is truly global. Whether developers build large and diverse training sets or study specific cultural values, many considerations matter when calibrating AI systems to behave appropriately in a given setting. Moreover, there is an opportunity to understand global priorities and create technology beneficial to underserved and historically marginalised populations.
References:
Buccella, A. (2023). “AI for all” is a matter of social justice. AI and Ethics, 3(4), 1143–1152. https://doi.org/10.1007/s43681-022-00222-z.
Hung, P. (2024, July 17). IEAI Event – “Advancements in Assistive Robotics: A Dialogue for Progress” [Video]. Institute for Ethics in Artificial Intelligence. https://www.youtube.com/watch?v=_LNHsvbIYW4&t=2s.
Mgimwa, P. A., & Dash, S. R. (2024). Reviving Endangered Languages: Exploring AI Technologies for the Preservation of Tanzania’s Hehe Language. In S. S. Mohanty, S. R. Dash, & S. Parida (Eds.), Applying AI-Based Tools and Technologies Towards Revitalization of Indigenous and Endangered Languages (pp. 23–33). Springer Nature. https://doi.org/10.1007/978-981-97-1987-7_2.
Rana, V. (2025). Indigenous Data Sovereignty: A Catalyst for Ethical AI in Business. Business & Society, 64(4), 635–640. https://doi.org/10.1177/00076503241271143.
Roche, C., Wall, P. J., & Lewis, D. (2023). Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI and Ethics, 3(4), 1095–1115. https://doi.org/10.1007/s43681-022-00218-9.
Surendra, S. V., Priyadarshini, S., & Parida, S. (2024). Preservation of Vedda’s Language in Sri Lanka. In S. S. Mohanty, S. R. Dash, & S. Parida (Eds.), Applying AI-Based Tools and Technologies Towards Revitalization of Indigenous and Endangered Languages (pp. 35–44). Springer Nature. https://doi.org/10.1007/978-981-97-1987-7_3.
Tao, Y., Viberg, O., Baker, R. S., & Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9), pgae346. https://doi.org/10.1093/pnasnexus/pgae346.
Further reading/watching/listening:
Books & Articles:
Kudina, O., & van de Poel, I. (2024). A sociotechnical system perspective on AI. Minds and Machines, 34(3), 1–9. https://doi.org/10.1007/s11023-024-09680-s2.
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8.
Image Attribution
Generated by: ChatGPT
Date: 23/06/2025
Prompt: “Can you generate an image that represents cultural diversity and artificial intelligence? Please do not include words in the photograph and also make the colours inviting and bright. Can you also emphasize ethics and sharing.”