AI and the Future of the Web: Between Convenience and Curiosity

As ChatGPT and its rivals reshape the internet, the real threat is not the loss of information, but the erosion of inquiry itself.

I use ChatGPT daily. Probably too much. I oscillate between awe and unease – between marvel at the eloquence of the machine and a wary recognition of the echo chamber it builds around us. Artificial Intelligence, so-called, is a conjuring trick of extraordinary scale: it digests the collective work of humankind and speaks it back to us, a mirror that flatters, distorts, and occasionally reveals.
When one has a blind spot – in the retina or the soul – the brain fills in the missing detail from instinct or memory. That is the magic, or the peril, of AI: it hallucinates our absences. When the data runs out, it dreams; when we falter, it improvises. It’s not so different from us, except that we invented it to remind us of ourselves.
Yet the paradox persists. The chatbot depends upon the very content it displaces. It feeds upon the living imagination even as it threatens to automate it. The web economy, once a marketplace of minds, now trembles under the weight of its own reflection. The abyss, as Nietzsche warned, is a slow and patient gazer. Look too long into the algorithm, and you may find the algorithm looking back – articulate, deferential, and strangely familiar.
Still, I keep asking questions. Perhaps because I suspect that in its confabulations lie our own. Perhaps because every hallucination hides a truth we have forgotten. Or perhaps simply because, as writers and readers, we cannot resist the shimmer of the mirror, even when we know the mirror is also a mask.
On 5 October 2025, The Economist published an article titled “AI is killing the web. Can anything save it?” The piece argued that the rise of AI chatbots is undermining the economic and cultural foundations of the internet, threatening the very ecosystem that sustains journalism, creative work, and niche content. This essay reflects on the article’s warnings, exploring not only the economic and structural consequences of generative AI but also its broader cultural implications — and, ironically, the ways in which deliberate, interrogative engagement with AI can preserve the curiosity the article fears is vanishing.
The rise of AI chatbots such as ChatGPT is reshaping the internet’s economic and cultural ecosystem. For decades, the web operated on a delicate bargain: users accessed information and entertainment for free, while content creators and platforms monetized that attention via advertising or data collection. This system sustained journalism, creative industries, and countless niche websites. Now, AI threatens to unravel it. Chatbots can answer questions, summarize articles, and generate content with increasing sophistication, often without sending users to the original sites. Traffic – and revenue – flows away from the websites that produced the material in the first place.
The implications are far-reaching. If content creators cannot earn a living, the diversity and quality of material on the web could decline sharply. Specialized journalism, independent blogs, and even large media outlets might struggle to survive if audiences bypass them in favor of AI-generated summaries. This is not just a financial issue, but a cultural one: the internet’s richness relies on a wide ecosystem of creators, whose incentive to contribute diminishes when AI intermediates access to their work.
The piece examines potential remedies but casts doubt on any quick fix. Subscription models might help, though they risk fragmenting the web and excluding users who cannot pay. Legal or regulatory approaches — requiring AI to cite sources or compensate content creators — face practical and global coordination challenges. Revenue-sharing between AI developers and content producers remains largely theoretical. The Economist concludes that while AI offers enormous utility, it may simultaneously hollow out the ecosystem that makes the web vibrant, informative, and sustainable. In other words, the very tool designed to amplify access to information could starve the sources of that information, threatening the web’s long-term health.
Beyond the economic threat lies a deeper cultural shift. By mediating access to knowledge, AI risks centralizing authority over information in the hands of a few large platforms. Where users once navigated a decentralized web – from blogs to academic repositories – AI now acts as a single gateway, subtly shaping what people see and how it is framed. This raises questions of epistemic diversity: whose knowledge is amplified, whose is marginalized, and how errors or biases propagate when AI interprets the web.
Generative AI also encourages a culture of consumption over engagement. Users increasingly rely on synthesized answers rather than visiting original sources, participating in discussions, or evaluating evidence themselves. Over time, this could erode critical thinking, reduce exposure to divergent viewpoints, and reinforce algorithmic echo chambers. AI promises unprecedented access to information, yet by centralizing authority, it may shrink the very informational ecosystem it draws upon.
Importantly, this is no longer a speculative threat. AI is not poised to take over content creation; it has already arrived. Large language models are generating articles, essays, and summaries at scale, while readers turn to chatbots to précis, distill, or interpret that very material. The paradox is immediate and self-referential: the AI produces the text and the AI digests it, while humans step in chiefly to interrogate, verify, and contextualize. In this recursive loop, the risk of distortion compounds, especially when hallucinations – the machine’s way of filling gaps in knowledge – go unexamined.
A particular dimension of this phenomenon lies in how AI generates those hallucinations. These arise because the AI depends entirely on human-generated content to construct its answers. When gaps, biases, or missing information appear, the model must invent plausible-seeming outputs. An apt analogy comes from human vision: when someone has a blind spot or an obstruction in the retina or optic nerve, the brain instinctively fills in the missing details, drawing on memory, context, or subconscious “magic” to create a coherent image. Similarly, when a language model encounters missing content, it fills the gap, generating answers that seem authoritative but may not reflect reality. The AI, like the brain filling a visual blind spot, is striving for completeness, yet it can mislead if we do not interrogate its outputs. The lesson is clear: just as we know the eye can be tricked, so must we question AI’s “sight,” balancing reliance with critical vigilance.
Yet the irony is striking: engaging with AI is also the way to test its warning. To question AI is still to use it; to précis is still to probe. Used dialogically – as a partner in thought rather than a vending machine – AI can revive the very exploratory spirit it threatens.
Indeed, reading this warning through AI enacts the very behaviour the article fears is vanishing. But here lies the distinction: AI can serve as a tool for interrogation, not substitution. By asking follow-up questions, seeking out all sides of an argument, and qualifying answers, one preserves the habits of critical thinking and active engagement. AI becomes an interactive partner, prompting reflection rather than replacing it. Preserving curiosity – and the web’s intellectual richness – depends not on abstaining from AI, but on using it with vigilance, discernment, and a willingness to probe deeper than the first answer it provides.
Viewed this way, AI may not spell the end of the web but its transformation. The sprawling landscape of pages may evolve into a lattice of conversations, mediated by intelligent systems but animated by human questions, scepticism, and moral reflection. Survival of the web depends on whether users insist on curiosity over convenience, engagement over automation. It depends on whether we continue to ask: Where did this knowledge come from? Who benefits from its presentation? What has been omitted? Machines can synthesize words, but they cannot yet replicate the moral and intellectual labour of wondering why those words matter.
In short, the web will not die at the hands of AI; it will die only if its users stop thinking. The technology challenges us, but it also offers an opportunity: to transform passive consumption into deliberate inquiry, to turn a warning into a call to action, and to ensure that curiosity remains the lifeblood of the digital commons. What is imperilled is not the web itself, but the spirit of engagement that makes it meaningful. By interrogating, questioning, and balancing – and by understanding the limits of AI’s “vision” – we preserve that spirit, demonstrating that the human capacity for reflection remains, for now, irreducible.
Coda
And so the circle closes: the machine writes, the reader queries, the machine replies, and the reader wonders whether the wondering itself has been automated. Perhaps this is an echo of Voltaire’s jest – that if God did not exist, we would have to invent him – though he could hardly have imagined silicon doing the inventing, or truth itself as the invention. Yet invention, in all its ambiguity, remains a human art: the ability to doubt, to test, to laugh at the illusion even as we depend upon it. The web may flicker, morph, or narrow beneath the weight of automation, but the act of questioning – that restless, unprogrammable impulse – is what keeps it alive. If AI fills the blind spots, it is still up to us to notice the seams. And perhaps that is where the future of thought now resides: not in what the machine can see, but in what we still suspect it cannot.