Who wrote this? The newsroom’s AI dilemma

Recently, a new name and face popped up in Jerusalem as the Middle East correspondent for one of the news publications I subscribe to. There was no doubt that this newbie was a veteran journalist who writes very well. But I observed that this journo’s articles demonstrated a much deeper knowledge of the area, its history, politics and issues than their seemingly meagre “in country”, boots-on-the-ground experience justified.

Around the same time, I had become acquainted with the accessibility, efficiency and usefulness of AI – in the form of OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer), the most popular and user-friendly chatbot available to ordinary, non-techie mortals [see, in In That Howling Infinite, The promise and the peril of ChatGPT]. It occurred to me then that the correspondent may have sought help from a mentor more convenient and less time-consuming than professors Wikipedia and Google.

Holding this thought, I surmised that the pressure placed on news platforms nowadays by the downsizing of newsrooms, the redeployment of many correspondents to new overseas postings, and the need to feed the 24/7 news cycle has encouraged, and indeed necessitated, a resort to AI assistance in generating content.

It got me thinking about how artificial intelligence has crept into newsrooms like a silent partner with a knack for deadlines, reshaping not only how journalism is produced but how it is trusted. Once, reporting was firsthand, with local knowledge, conversations and interviews, and painstaking verification. Now, algorithms can summarise, translate, and even draft entire articles, producing work that reads as though it has been tempered by experience – and yet, no human hand may have touched much of it. Editors assure us humans remain in charge, but the reader is left to wonder: where does expertise end and machine assistance begin? In this new age, as AI hastens research and polishes prose, the signals that once guaranteed credibility – years of presence, insight and experience – could become vacant traces in the machinery of reportage.

When the reporter knows too much … the fragile trust between the newsroom and the reader 

AI arrived quietly, almost innocuously, slipping discreetly into the newsroom. What began as an experiment with automated sports recaps and quarterly earnings reports has grown into something far more consequential: reporters now consult large language models to research, summarise, translate, and sometimes draft the very words beneath their own bylines. Officially, humans remain the gatekeepers. In practice, however, the boundary between journalist and algorithm is porous, and with it, so too are the foundations of trust.

In 2025, AI is routine but still controversial. Beyond what was initially formulaic reporting – sports scores, earnings, weather – journalists now employ AI for background research, translation, summarisation, and drafting features or opinion pieces. Outlets such as The New York Times, BBC, Guardian, ABC, Reuters, and the AP have policies designed to preserve accountability, protect sources, and maintain editorial oversight. Yet these rules vary in scope and transparency, and public labelling is inconsistent.

Corporate policies and protocols reflect the tension. The New York Times permits AI for research and idea generation but forbids outright the publication of AI-generated text, and warns against feeding confidential material into such tools, since it may be used by others. The BBC allows transcription, translation, and background work, yet insists on clear labelling and full editorial responsibility for AI-assisted content. The Guardian and Australia’s ABC bar AI from producing “core journalism content” without senior approval. Reuters, AP, and others adopt a pragmatic middle ground: AI may handle structured tasks, provided a human verifies the results.

Three principles recur across these guidelines. Responsibility for accuracy and balance rests with the journalist and not with the algorithm; AI is a back-office assistant, not a public face; and proprietary information must never be fed into commercial systems that might use it. The safeguards are reassuring on paper but slippery in practice: what precisely qualifies as “human verification”?

The subtler challenge is perceptual. AI reshapes the texture of reporting. A journalist arriving in a new and unfamiliar posting can use ChatGPT to call up instant timelines, political profiles, historical disputes, and past quotations. Within hours, someone with a modicum of on-the-ground experience can produce copy that reads as though it has been informed by years of learning and observation. The newcomer can now play a veteran, the parvenu masquerade as an expert. Readers who know the reporter’s history may sense an uncanny proficiency – but detection requires fresh interviews, local sourcing, and on-the-scene observation.

All this challenges the implicit contract between journalist and audience. Bylines were once proxies for experience: a correspondent in Beirut or Baghdad wrote from authority earned on the scene and not from a chatbot’s training data. If AI provides the historical sweep and analytical polish once accrued over years, trust becomes fragile. The risk is subtle: not just factual error – though “hallucinations” remain a real threat – but a slow erosion of authenticity. News may be accurate yet lose the human texture that signals lived engagement.

Current safeguards offer cold comfort. “Human in the loop” could mean a full rewrite or a quick skim. Internal disclosure rules are invisible to readers, and public labelling applies only when AI generates a significant portion of a story. Without independent audits or more granular transparency, audiences cannot know how much was machine-assisted or how rigorously it was verified.

The stakes are high. Journalism depends not just on facts but on the perception that those facts have been gathered, weighed, and conveyed by people willing to stand behind them. AI is a remarkable research assistant, a trove of background knowledge, yet its silent presence risks hollowing out the very authority that makes reporting valuable. Newsrooms that wish to preserve consumers’ confidence must move beyond vague assurances of “editorial oversight” and develop tangible ways to show readers when, where, and how AI and algorithms have shaped the work they consume.

It is entirely possible for a journalist to produce copy that reads as if informed by decades of personal fieldwork, simply because AI accelerates research and drafting. Until disclosure practices and independent audits become routine, the degree of AI reliance will remain largely invisible, leaving readers to rely on sourcing, original interviews, and the details of on-the-ground presence to judge whether they are reading firsthand reporting or an AI-boosted desk job.

So, while artificial intelligence promises speed, breadth, and scope, it introduces instability into the journalist–audience relationship. The policies and protocols of major news platforms assure us that there is editorial oversight and human responsibility, yet they cannot show readers how much of a story was shaped by an algorithm or how deeply it was verified. The danger is not only that AI might fabricate facts but that it can simulate the authority of lived experience while concealing its origins. Until newsrooms adopt rigorous disclosure and public standards, trust in the press will rest on a fragile faith – one that must now account not only for human judgment but for the invisible influence of machines, those silent backroom gophers.

Coda

Confession time. This is where I must reveal the irony behind this essay. It examines AI, authenticity and trust, and yet it was itself shaped by ChatGPT. In a dialogue between a human and an app, I asked questions, proposed arguments and considered answers, and, having examined submitted examples of my writing, an artificial collaborator learned to simulate my voice and delivered much of what is written above. This might not be plagiarism as we currently define it – composed as it is from sources unknown to me – nor simple automation, but rather, perhaps, a kind of double act in which my thoughts, voice and style are preserved even as the machine learns to imitate the weave.

This is more than a clever conjuring trick. It illustrates the very dilemma this essay describes: how to maintain trust when technology can mirror an author’s cadence so faithfully that the boundary between lived expertise and fabricated fluency begins to blur. The words remain mine because I chose, guided and approved them. Yet their swift and seamless arrival invites a question: if an algorithm can echo my style so convincingly, how do you discern the difference between a writer and a well-trained machine?

The answer is elusive – illusive, even. At day’s end, it all comes down to the author’s perspective, judgement and integrity – the choices that determine what to include and what to discard, what to emphasise and what to downplay. For the moment, these choices remain just beyond the algorithm’s grasp, though the gap may be narrowing, and the distinction between discernment and dissembling will become harder to sustain.

This postscript is at once confession and proof: the very tools that threaten to hollow out trust also expose the fragile value of the human mind clutching the steering wheel. This essay proves its own point: a machine can mimic my voice, but only a human decides what truly matters – at the moment …

Written and refined with the help of ChatGPT

See also, in In That Howling Infinite, The promise and the peril of ChatGPT
