Blog

Our latest blog posts on topics related to natural language processing & alignment.

How Do LLMs Reason? The Power of Thinking Longer and Test-time Scaling

For years, the industry has focused on making models bigger. This training-time scaling (Kaplan et al., 2020) made models highly fluent, similar to a student who memorised the entire textbook. But fluency is not the same as reasoning. Large language models (LLMs) still struggle with complex logic, maths and coding tasks because they respond too quickly, predicting the next word without truly thinking (McCoy et al., 2023).

AI is Reshaping Regulatory Thinking

Trigger Warning/Disclaimer: This blog post mentions suicide. If you or someone you know is experiencing suicidal thoughts or a crisis, please reach out immediately for help. A hotline in your country can be found on befrienders.org.

AI is reshaping not only our social practices but also the foundations of regulatory thinking. The transformative power of AI has compelled regulators to adopt a regulatory learning process, shifting from static legal doctrine to an adaptive, learning-driven regulatory approach (Hadfield & Clark, 2023). This shift is driven both by the emergent challenges of AI and by the motivation to devise laws that enable AI innovation while protecting against its potential risks (Smuha, 2019). Accordingly, we present several doctrinal examples to argue that AI does not merely challenge existing legal rules but disrupts the obsolete assumptions underlying traditional regulations, making regulatory learning a structural necessity rather than a policy choice.

LLMs as Tools in the Continuum of Human Cultural Evolution

Human culture is unique among cultures, human and non-human alike, in how knowledge is transmitted and preserved. This process is referred to as the “ratchet effect” (Tomasello et al., 1993): a mechanism that faithfully conserves existing knowledge and skills across exchanges while also adding new innovations. This dual process ensures the accumulation of cultural knowledge.

Renewing Craftsmanship in the Age of AI: Toward a Design Pedagogy of Care

Craftsmanship has long held a central place in art and design history. While often associated with form and aesthetics, its deeper emphasis lies in dedication, tradition and quality: in the care and attention given to the process of making. As technology accelerates and productivity becomes a dominant cultural value, design movements have emerged to resist this pace. Slow technology, for instance, encourages mindful engagement with products (Hallnäs & Redström, 2002), while speculative design and design fiction invite audiences to imagine alternative futures (Dunne & Raby, 2013; Bleecker, 2009). Yet both still centre primarily on the perception of the audience and on how the work is received or interpreted. Craftsmanship, by contrast, turns inward: it concerns the mode of practice, the values and sensibilities embodied by the maker in the act of creation. It asks not what is made, but how it is made, and how that process shapes the maker themselves. In the context of design education, this focus on practice makes craftsmanship particularly resonant: it cultivates an attitude, a rhythm and a sense of responsibility toward making that extends beyond outcomes.

Navigating Truth and Accountability in the Age of AI Information

Journalism, as one of the main driving forces behind information flows in modern societies, has traditionally presented itself as the medium of truth. The credibility of news institutions and the legitimacy of journalism as a profession have long rested on their ability to produce, verify and disseminate information grounded in factual accuracy and editorial integrity. Yet in the era of artificial intelligence, these epistemic foundations are being profoundly challenged: generative AI does not merely replicate or automate journalistic processes, but potentially transforms them. The generative potential of AI introduces a new layer of uncertainty to news production, as tools that are neither human nor conscious now produce texts bearing the marks of human authorship, originality and even moral voice.

Tying the Knots of Trust: Understanding the Evolving Sociotechnical Ecosystem of Trust in LLMs

When we interact with a chatbot, ask a digital assistant for advice or rely on LLMs to summarise a long document, we are doing something profoundly human: we are trusting. Trust is part of what makes cooperation possible between people, and increasingly also between people and machines. In the age of artificial intelligence (AI), and particularly with the rapid rise of large language models (LLMs), trust has become a central issue.
