Author – Stefano Sorrentino
Journalism, as one of the main driving forces behind information flows in modern societies, has traditionally promoted itself as the medium of truth. The credibility of news institutions and the legitimacy of journalism as a profession have long rested on their ability to produce, verify and disseminate information grounded in factual accuracy and editorial integrity. Yet, in the era of artificial intelligence, these epistemic foundations are being profoundly challenged: generative AI not only replicates or automates journalistic processes but potentially transforms them. The generative potential of AI introduces a new layer of uncertainty to news production, as tools that are neither human nor conscious now produce texts bearing the marks of human authorship, originality and even moral voice.
This moment marks a paradigm shift in our understanding of authorship and accountability. Where journalism once revolved around identifiable actors such as reporters, editors and institutions, today’s information ecosystem leans toward a “hybrid authorship” model, shared between humans and algorithmic systems. As AI-generated texts become the norm on digital platforms and search engines, the boundaries between fact and fabrication, between “news” and “content”, grow less defined and more fluid, and the question of who is responsible for what we read (be it an individual, an institution, a dataset or a model) becomes more difficult to answer (Diakopoulos, 2019).
From Fixed to Fluid Texts
The notion of “fluid text” predates generative AI: it emerged in the late 20th century, as scholars began describing how digital media disrupted the stability once guaranteed by print (Bryant, 2002). In the print tradition, text was a fixed object, edited and bound into permanence, bearing the marks of authority and finality. The invention of the printing press inaugurated what has been called the “Gutenberg Parenthesis”, a historical period during which the printed word dominated cultural transmission, stabilising knowledge and offering a shared epistemic framework (Pettitt, 2012).
Digital technologies began to erode that stability. Hypertext, versioning and collaborative authorship turned text into something mutable, interconnected and endlessly revisable; with the arrival of generative AI, this fluidity has reached a new milestone. Text is no longer merely re-edited or reworked, but perpetually re-created. The same prompt can produce an infinite number of “original” outputs, each coherent yet contingent, plausible yet not inherently verifiable. The text thus becomes dynamic, adaptive and fundamentally unstable, reflecting the non-deterministic logic of the underlying foundation models.
This fluidity challenges traditional assumptions about how we evaluate truth. When information can be seamlessly rewritten, condensed or reframed at scale, the distinction between version and fabrication becomes difficult to draw, and the stability that once underwrote journalistic credibility gives way to a logic of continuous generation and regeneration. In this context, “truth” risks becoming a moving, unidentifiable target (Couldry & Mejias, 2019).
Reading and Writing in a Fluid Age
In an age when reading itself is automated and information gathering is an on-demand activity, with text treated as fluid, fungible and endlessly transformable in combination with other texts (including multimedia forms), encountering the original source text becomes a deliberate act, almost a form of proactive curiosity.
Readers who engage directly with original, human-written, unfiltered material are likely to become increasingly rare, and their experience will differ radically, for better or worse, from that of readers who rely on AI tools.
The very concept of being “knowledgeable” may shift towards the ability to navigate this fluid scenario successfully, rather than the ability to perform traditional search queries or research. Some writers may attempt to attract human readers through voice, tone and personality, elements that may resist algorithmic flattening (see “Creativity, Style and the Flattening Threat”). Others may write for AI, crafting content optimised for machine parsing rather than human resonance. New stylistic strategies may emerge to counter automated reading, establishing forms of human expression that are not easily summarised. In this ecosystem, text may cease to be an end in itself and instead serve as a transitional medium toward other forms of communication, experience or synthesis.
Back to Orality?
In this fluid context, the rise of conversational AI systems blurs the boundaries between written and spoken communication, suggesting that we may, paradoxically, be returning to a “digital” oral culture. Indeed, interacting with an AI assistant resembles dialogue more than reading: responses are generated in real time, tailored to context and often ephemeral. Much of this communication never leaves the private space of a chat window, existing momentarily and disappearing without trace.
Media theorist Walter Ong (1982) once described the emergence of a “secondary orality”, in which modern technologies revive the participatory, immediate qualities of oral culture while retaining textual affordances. AI-powered dialogue systems amplify this further, dissolving the distinctions between reading, listening and speaking. Instead of engaging with stable texts, we now navigate a world of continuous verbal exchange: fragmentary, improvisational and infinitely replicable.
This resurgence of “digital” orality carries profound implications for how truth circulates. Oral traditions rely on trust, presence and community rather than external verification, but in a digital environment dominated by AI-generated speech and text, these mechanisms are easily manipulated. What feels conversational and authentic may in fact be algorithmically produced and contextually misleading, risking a kind of “artificial intimacy”. In these respects, the epistemic grounding of truth itself, once stabilised by textual permanence, is destabilised once more.
Implications for Truth and Accountability
For these reasons, in the age of fluid AI information, truth and accountability face unprecedented challenges.
The first is traceability: AI systems generate content from massive, opaque datasets, making it difficult to trace specific claims to verifiable sources. Traditional journalistic methods, such as fact-checking, attribution and editorial oversight, rely on stable referents. When information is generated probabilistically, these referents dissolve, and the question “Where did this come from?” becomes difficult to answer.
The second is responsibility. If an AI-generated article spreads misinformation, who is accountable? The developer, the publisher or the user who prompted it? Existing ethical and legal frameworks, designed for human authorship, struggle to accommodate collective and distributed forms of creation. The absence of clear responsibility risks enabling a culture of epistemic evasion, in which no one bears full liability for the consequences of false or misleading content (Gillespie, 2018).
The third is epistemic authority. AI-generated texts can imitate the voice of expertise so convincingly that readers may accept them as authoritative. Yet such authority lacks grounding in human judgment or lived experience. Over time, this may erode public trust in legitimate institutions of knowledge, from journalism to academia. The danger lies not only in the spread of false information but in the gradual normalisation of uncertainty, possibly leading to a condition in which truth and accountability become optional.
To navigate this scenario, societies will need to redefine both truth and accountability for an age of fluid information. This means developing transparent systems for documenting AI-generated content, enforcing traceable attribution, and promoting literacy that emphasises critical engagement over passive consumption. It also requires reasserting the value of human editorial judgment, of voices willing to stand behind what they publish, in contrast to the anonymity of algorithmic output.
References:
Bryant, J. (2002). The Fluid Text: A Theory of Revision and Editing for Book and Screen. University of Michigan Press.
Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
Diakopoulos, N. (2019). Automating the News: How Algorithms Are Rewriting the Media. Harvard University Press.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
Ong, W. J. (1982). Orality and Literacy: The Technologizing of the Word. Methuen.
Pettitt, T. (2012). Bracketing the Gutenberg Parenthesis: Print, Performance, and the History of Text. Comparative Literature Studies, 49(3), 351–369.
Further Reading/Watching/Listening:
Hayles, N. K. (2005). My Mother Was a Computer: Digital Subjects and Literary Texts. University of Chicago Press.
Bolter, J. D. (2001). Writing Space: Computers, Hypertext, and the Remediation of Print (2nd ed.). Routledge.
Rothman, J. “What’s Happening to Reading?” The New Yorker. https://www.newyorker.com/culture/open-questions/whats-happening-to-reading
Image Attribution
Generated by: Midjourney
Date: 27/10/2025
Prompt: “A surreal digital illustration showing a journalist at a desk where newspapers merge into streams of code and text. The reflection in the laptop screen appears half-machine, symbolising hybrid authorship. In the background, a newsroom fades into a digital matrix. The mood is thoughtful and modern, in shades of blue and gold, representing the tension between truth, AI and accountability.”