Author – Gökçe Şahin
Human culture is unique in how knowledge is transmitted and preserved, distinguishing it from the cultures of all other species. This process is referred to as the “ratchet effect” (Tomasello et al., 1993): a mechanism that faithfully conserves existing knowledge and skills across exchanges while also contributing new innovations. This dual process ensures the accumulation of cultural knowledge. Social learning refers to learning from other social agents through processes such as imitation (Allen, 2012; Bandura, 1962), observation, direct interaction (Rogoff, 1990, 2003), and testimony (Harris & Koenig, 2006). With advances in technology, humans have begun engaging with novel partners in the construction of knowledge, such as artificial intelligence (AI) agents, particularly large language models (LLMs), within new learning dynamics. As these tools play an active role in the transmission of information, they indirectly contribute to its accumulation.
The existence of these AI-driven tools underscores the peculiar nature of human intelligence, which is scaffolded by extended-mind tools (e.g., note-taking as a practice for enhancing memory performance). Interactions with LLMs, such as asking for advice, correcting grammatical mistakes, and drafting essays, may facilitate a more intellectually capable society, cognitively scaffolded by AI tools (Clark, 2001). However, adopting these models to support cognitive processes requires careful consideration. For instance, routine use of AI tools may lead to overreliance and a decline in cognitive skills as these competencies are exercised less frequently (Kosmyna et al., 2025). In line with this concern, several points require further critical attention.
First, while LLMs encompass a vast repertoire of accumulated cultural information, the outputs generated by these tools represent only a narrow subset of cultural diversity, as they are predominantly trained on datasets reflecting mainstream Western knowledge and English-language media sources (but see Workshop et al., 2022). This poses a risk of reproducing biases (e.g., historical prejudices against underrepresented communities) that humanity has not (yet) overcome (Khamassi et al., 2024) and of precipitating cultural homogenization.
Furthermore, the widespread use of LLMs in today’s world points to a transition in human cognition toward a more digital era, in which cognitive processes are exercised in partnership with artificial agents. The practice of learning with LLMs (i.e., not from other social agents) invites discussion of the extent to which such interactions can be conceptualised as social learning. Traditional binary classifications have become increasingly blurred, necessitating more comprehensive and deliberate definitions that better capture the differences between interactions with social and with artificial learning partners and that allow the impacts of this shift to be explored.
Additionally, LLMs might progressively be integrated with robotics and become embodied. Embodied LLMs in social interactions may increase individuals’ attribution of agency to these tools, which, in turn, might distort perceptions of AI, for example through anthropomorphism, the attribution of mental and biological capacities to nonliving entities (Geiselmann et al., 2023; Piaget, 1973), or agentic animism (Okanda et al., 2021). Given this, it is critically important to consider the consequences for human cognitive development as people come to rely on AI agents rather than social agents in their daily routines, a relatively novel situation in the history of human cognition. In this sense, as scholars, we have an ethical responsibility to inform users, especially vulnerable groups (e.g., children and adolescents), about the potential risks associated with LLM use and to work on solutions that mitigate undesirable impacts on sociocognitive development.
Given that most of the responses generated by LLMs derive, through the training data, from a selective subset of humanity’s knowledge and cultural products, it is critical to ask to what extent these systems can contribute significant modifications to the ratchet effect rather than reproducing existing knowledge in repetitive ways (Gopnik, 2025; Klein, 2023). In parallel, if society increasingly (over)relies on LLMs, what forms of novel advancement might emerge from the accumulated corpus of human knowledge? Although LLM tools are not yet competent enough to produce as much innovation as humans do (Yiu et al., 2023), they might still scaffold human agents in the construction of novel knowledge by serving as rich, accessible repositories of information, thereby functioning as effective learning tools. This underscores the importance of designing LLM agents for creative tasks (Perez et al., 2024) that can produce greater variation, enabling them to contribute to the modifications pursued by humans across cultural evolution while preserving cultural heritage and accumulated knowledge.
In summary, LLMs represent a critical transition, within a remarkably short period of human history, toward a more digital form of multimodal interaction between humans and artificial systems. We therefore need a better understanding of their alignment with human needs and preferences as these tools become increasingly integrated into everyday life, whether as purely transactional tools or as communication partners. To design and improve more user-centred AI agents, we should adopt a holistic perspective from developmental science that centres human cognition and its developmental needs, thereby enabling better, more ethical navigation of interactions between organic and artificial intelligence.
References
Allen, J. W. P. (2012). Imitation situations: Learning to use others as a resource for further activity. [Doctoral dissertation, Lehigh University].
Bandura, A. (1962). Social learning through imitation. In M. R. Jones (Ed.), Nebraska Symposium on Motivation (pp. 211–274). Univer. Nebraska Press.
Clark, A. (2001). Natural-born cyborgs? In M. Beynon, C. L. Nehaniv, & K. Dautenhahn (Eds.), Cognitive technology: Instruments of mind (Lecture Notes in Computer Science, Vol. 2117). Springer. https://doi.org/10.1007/3-540-44617-6_2
Geiselmann, R., Tsourgianni, A., Deroy, O., & Harris, L. T. (2023). Interacting with agents without a mind: The case for artificial agents. Current Opinion in Behavioral Sciences, 51, Article 101282. https://doi.org/10.1016/j.cobeha.2023.101282.
Harris, P. L., & Koenig, M. A. (2006). Trust in testimony: How children learn about science and religion. Child Development, 77, 505–524. https://doi.org/10.1111/j.1467-8624.2006.00886.x
Inner Cosmos with David Eagleman. (2025, May 19). What if AI is not actually intelligent? (Alison Gopnik) [Video]. YouTube. https://www.youtube.com/watch?v=hTb2Q2AE7nA&t=3823s
Engelmann, J. (2025, July 3). Personal communication [meeting notes].
Khamassi, M., Nahon, M. & Chatila, R. (2024). Strong and weak alignment of large language models with human values. Scientific Reports 14, 19399. https://doi.org/10.1038/s41598-024-70031-3.
Klein, N. (2023, May 8). AI machines aren’t ‘hallucinating’. But their makers are. The Guardian.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing tasks [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2506.08872
Okanda, M., Taniguchi, K., Wang, Y., & Itakura, S. (2021). Preschoolers’ and adults’ animism tendencies toward a humanoid robot. Computers in Human Behavior, 118, 106688.
Perez, J., Léger, C., Ovando-Tellez, M., Foulon, C., Dussauld, J., Oudeyer, P.-Y., & Moulin-Frier, C. (2024). Cultural evolution in populations of large language models [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2403.08882
Piaget, J. (1973). The child’s conception of the world. Transl. by Joan and Andrew Tomlinson. Paladin.
Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context. Oxford University Press.
Rogoff, B. (2003). The cultural nature of human development. Oxford University Press.
Tomasello, M. (2016). Cultural learning redux. Child Development, 87, 643–653. https://doi.org/10.1111/cdev.12499
Tomasello, M., Kruger, A. C., & Ratner, H. H. (1993). Cultural learning. Behavioral and Brain Sciences, 16(3), 495–552. https://doi.org/10.1017/S0140525X0003123X.
Workshop, B., Le Scao, T., Fan, A., Akiki, C., Pavlick, E., Ilić, S., … Bari, M. S. (2022). BLOOM: A 176B-parameter open-access multilingual language model [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2211.05100
Yiu, E., Kosoy, E., & Gopnik, A. (2023). Transmission versus truth, imitation versus innovation: What children can do that large language and language-and-vision models cannot (yet). Perspectives on Psychological Science, 19(5), 874–883. https://doi.org/10.1177/17456916231201401
Further Reading/Watching/Listening:
Readings:
Danovitch, J. H., & Alzahabi, R. (2013). Children show selective trust in technological informants. Journal of Cognition and Development, 14(3), 499–513. https://doi.org/10.1080/15248372.2012.689391
Farrell, H., Gopnik, A., Shalizi, C., & Evans, J. (2025). Large AI models are cultural and social technologies. Science, 387(6739), 1153–1156. https://doi.org/10.1126/science.adt9819
Videos:
Yuval Noah Harari. (2025, June 21). AI and human evolution [Video]. YouTube. https://www.youtube.com/watch?v=jt3Ul3rPXaE
Image Attribution
Generated by: Better Images of AI
Date: 2025
Prompt: “Repeating image of four skulls with increasing doubling, blurring, ghosting and pixelation.”