Author – Mohaned Bahr
AI is reshaping not only our social practices but also the foundations of regulatory thinking. Its transformative power has compelled regulators to adopt a regulatory learning process, shifting from static legal doctrine to an adaptive, learning-driven regulatory approach (Hadfield & Clark, 2023). This shift is driven both by the emergent challenges of AI and by the motivation to devise laws that enable AI innovation while protecting against its potential risks (Smuha, 2019). Drawing on several doctrinal examples, we argue that AI does not merely challenge existing legal rules but disrupts the obsolete assumptions underlying traditional regulation, making regulatory learning a structural necessity rather than a policy choice.
Why is AI Challenging Traditional Regulatory Approaches?
AI’s emergent challenges have exposed the limits of traditional regulatory approaches, necessitating a parallel shift in response to the technology’s rapid advancement. While the challenges are numerous, three illustrative scenarios have been selected to explain why traditional regulation struggles with AI. First, attributing traditional criminal liability relies on linking intent (mens rea) to conduct (actus reus), a logic that presumes human agency and traceable intention (Stasi, 2021). This approach is contested in the realm of generative AI and large language models (LLMs) for several structural reasons. Legally challenging cases have already emerged in which chatbots nudged mentally vulnerable users towards self-harm. The legal uncertainty in these cases stemmed from the opacity of the chatbots’ internal algorithms and from the impossibility of tracing criminal intent, or attributing liability to a specific developer or actor, for outputs that may diverge from their developers’ intentions or design specifications (Cath, 2018; Hill, 2025; Matorin, 2025).
A second example is the unbridled reach of AI systems, which can infringe upon fundamental rights, particularly the right to privacy (OECD, 2019). AI systems rely on data, which may be scraped indiscriminately from public internet platforms or derived from users’ inputs, for training and fine-tuning. Such practices risk eroding public trust in AI and producing economic consequences for developers (Perc et al., 2019). This dynamic strains consent-based and sector-specific data protection regimes, which were not designed for continuous, large-scale AI training and reuse. The Cambridge Analytica scandal remains a poignant example of such privacy infringement, underscoring the need to enact regulations in response to such occurrences. Third, AI spans a range of public and private sector applications that intersect various laws and regulatory areas. This ubiquity presents a further regulatory challenge: it breeds regulatory confusion and uncertainty, hinders enforceability and renders laws merely symbolic (Smuha, 2019).
Regulatory Learning as the Response to AI’s Governance Challenges
Against this backdrop, and in light of the examples above, regulators have recognised the ineffectiveness of their traditional legal approach and have begun to adopt a new mode of regulation, one that adapts to technological advances (Gasser & Palfrey, 2025).
To reorient their regulatory approach, regulators have had to employ the regulatory learning process. The first crucial step in this process is to embrace interdisciplinary work, since AI cuts horizontally across all sectors. Collaboration with economists, AI technical experts, industrialists, psychologists and the wider AI ecosystem is therefore necessary to develop technical expertise and to understand the dynamics of the AI industry (Black & Murray, n.d.; Cath, 2018; Smuha, 2019).
One beneficial outcome of disruptive technologies is the ease of information flow across jurisdictions, which has increased the efficacy of the regulatory learning process and accelerated its adaptability. The process has also become more prevalent as regulators around the globe confront similar regulatory issues and concerns, and ask similar questions. Drawing lessons from other jurisdictions and analysing their AI laws has therefore become the most practical way to produce effective laws that achieve the intended aims of regulation (Bennett, 1991; Dolowitz & Marsh, 2000).
For instance, many regulators around the world have begun to model their frameworks on the EU AI Act as part of their regulatory learning process. By way of illustration, California, a US state well known for its regulatory influence, has recently enacted two laws, SB-53 and SB-243, which draw to some extent on the EU AI Act’s regulatory approach. California’s initiative breaks with the hands-off approach adopted by the US federal administration (Hine & Floridi, 2024) and stands as a remarkable, evidence-based example of regulatory learning and of the adaptation of the EU AI Act to California’s legal context. A further cluster of jurisdictions has partially modelled its laws on the EU AI Act or drawn inspiration from its regulatory approach: Brazil with its Bill No 2338/2023; Canada with its Artificial Intelligence and Data Act (AIDA); and South Korea, whose new AI Basic Act emulates the EU AI Act’s risk-based and transparency approach (Bologa, 2025). As a result, the EU AI Act has emerged as a central reference point in global regulatory learning, offering a structured framework that other jurisdictions selectively adapt rather than replicate wholesale.
Taken together, these developments show that regulatory learning is becoming an integral part of the regulatory toolbox, enabling regulators to update their tools and adapt to the rapid pace of AI advances (Ahern, 2025).
Conclusion
If AI governance is to remain effective, regulatory systems must learn as dynamically as the technologies they seek to govern. The central challenge ahead is therefore not whether regulators should learn, but how regulatory learning can be institutionalised without sacrificing democratic accountability, legal certainty and fundamental rights.
References:
Ahern, D. (2025). The New Anticipatory Governance Culture for Innovation: Regulatory Foresight, Regulatory Experimentation and Regulatory Learning. https://doi.org/10.48550/ARXIV.2501.05921.
Bologa, A. (2025, April 23). Burying the Brussels Effect? AI Act Inspires Few Copycats. CEPA Center for European Policy Analysis. https://cepa.org/article/burying-the-brussels-effect-ai-act-inspires-few-copycats/.
Bennett, C. J. (1991). What Is Policy Convergence and What Causes It? British Journal of Political Science, 21(2), 215–233. https://doi.org/10.1017/S0007123400006116.
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080.
Dolowitz, D. P., & Marsh, D. (2000). Learning from Abroad: The Role of Policy Transfer in Contemporary Policy‐Making. Governance, 13(1), 5–23. https://doi.org/10.1111/0952-1895.00121.
Gasser, U., & Palfrey, J. G. (2025). Advanced Introduction to Law and Digital Technologies. Edward Elgar Publishing.
Hadfield, G. K., & Clark, J. (2023). Regulatory Markets: The Future of AI Governance (Version 4). arXiv. https://doi.org/10.48550/ARXIV.2304.04914.
Hine, E., & Floridi, L. (2024). Artificial Intelligence with American Values and Chinese Characteristics: A comparative Analysis of American and Chinese governmental AI Policies. AI & SOCIETY, 39(1), 257–278. https://doi.org/10.1007/s00146-022-01499-8.
Black, J., & Murray, A. (n.d.). Regulating AI and Machine Learning: Setting the Regulatory Agenda. European Journal of Law and Technology (EJLT), 10(3). https://ejlt.org/index.php/ejlt/article/view/722/980.
Hill, K. (2025, August 27). A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. The New York Times.
Lewis, D., Lasek-Markey, M., Golpayegani, D., & Pandit, H. J. (2025). Mapping the Regulatory Learning Space for the EU AI Act (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2503.05787.
OECD. (2019). Artificial Intelligence in Society. OECD Publishing. https://doi.org/10.1787/eedfee77-en.
Perc, M., Ozer, M., & Hojnik, J. (2019). Social and Juristic Challenges of Artificial Intelligence. Palgrave Communications, 5(1), 61. https://doi.org/10.1057/s41599-019-0278-x.
Smuha, N. A. (2019). From a “Race to AI” to a ’Race to AI Regulation’—Regulatory Competition for Artificial Intelligence. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3501410.
Stasi, A. (2021). Actus Reus and Mens Rea. In A. Stasi, General Principles of Thai Criminal Law (pp. 25–30). Springer Singapore. https://doi.org/10.1007/978-981-15-8708-5_3.
Trigger Warning/Disclaimer: The below source mentions suicide. If you or someone you know is experiencing suicidal thoughts or a crisis, please reach out immediately for help. A hotline in your country can be found on befrienders.org.
Matorin, S. (2025, September 6). The A.I. Chatbot as Therapist. The New York Times. https://www.nytimes.com/2025/09/06/opinion/ai-therapist-suicide.html.
Regulations:
California Senate Bill 53, Transparency in Frontier Artificial Intelligence Act (2025) https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB53 accessed 17 November 2025.
California Senate Bill 243 (2025) https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243 accessed 17 November 2025.
Image Attribution
Generated by: ChatGPT 5.1
Date: 17/12/2025
Prompt: “Conceptualise the regulatory learning process amid the era of AI”