In early November last year, we published The most nihilistic war ever … Sudan’s waking nightmare, a harrowing piece about the atrocities being committed in the West Darfur region of civil-war-torn Sudan. A friend commented on the article, accusing me of intellectual dishonesty for comparing the international outcry over Gaza to the silence on Sudan. His comment was not the first such justification:
“ … with respect to the lack of outrage, the mainstream media can stir outrage on any topic when its political masters and financial backers want it to. Why has it not done so in this instance? Follow the money is one rule of thumb. I assume it suits the powers that be to let the slaughter continue. I hope more people are inspired to become activists against this dreadful situation, but public opinion tends to follow the narrative manufactured by the media more than impel it. When it comes to pro-Palestinian activism it is the story of a long hard grind of dedicated protestors to get any traction at all against the powerful political and media interests which have supported the Israeli narrative and manufactured global consent for the genocide of Palestinians over many years. And still, although the tide is gradually turning, the West supports Israel to the hilt and crushes dissent. Using the silence in the media and in the streets over the slaughter in Sudan as an excuse to try and invalidate pro-Palestinian activism is a low blow and intellectually dishonest”.
This response is articulate and impassioned, but it also illustrates precisely the reflexive narrowing of moral vision that the comparison between Gaza and Sudan was meant to illuminate. His argument hinges on a familiar syllogism: that Western media outrage is never organic but always orchestrated (“follow the money”), that silence on Sudan therefore reflects elite indifference rather than public apathy, and that to highlight that silence is somehow to attack or “invalidate” the legitimacy of pro-Palestinian activism. It is a neat, closed circuit – morally reassuring, rhetorically watertight, but intellectually fragile.
In That Howling Infinite quizzed ChatGPT to collate and distil definitions and explanations of intellectual dishonesty because we sensed its presence everywhere in the debate, including – uncomfortably – around my own thinking. Not as accusation, but as inquiry. The Gaza war has a peculiar way of forcing moral positions to harden quickly, of rewarding certainty and punishing hesitation, of turning complexity into suspicion. In that climate, asking what intellectual dishonesty actually looks like felt less like an abstract exercise than a necessary act of self-defence.
An ideological comfort zone
Intellectual dishonesty, then, is the deliberate or unconscious use of argument, rhetoric, or selective reasoning to defend a position one knows – or should know – is incomplete, misleading, or false. It is less about lying outright and more about distorting truth for ideological comfort. It includes cherry-picking evidence, using double standards, appealing to emotion over reason, or refusing to acknowledge valid counterarguments. You could even call it “lying to oneself”, and truth be told, we are all guilty at one time or another.
Regarding the Gaza war, intellectual dishonesty is everywhere, on both sides of the divide, magnified by mainstream and social media’s hunger for moral simplicity and viral outrage. What begins as solidarity curdles into slogan; what starts as empathy ossifies into orthodoxy. And because this conflict sits at the intersection of history, identity, trauma, and power, the temptation to simplify – to choose a side and suspend thinking – is especially strong.
I asked the question, then, not to sit in judgement above the fray, but to understand how easily moral seriousness can slip into moral performance, and how even good intentions can narrow rather than enlarge our field of vision.
Intellectual dishonesty is rarely the bald lie. More often it is the careful omission, the selective emphasis, the comfortable narrowing of vision that allows us to remain morally certain while thinking we are being rigorous. It is the use of argument, rhetoric, or evidence not to discover what is true, but to defend what feels right. Cherry-picking, double standards, euphemism, emotional substitution for analysis, the refusal to sit with uncomfortable counter-truth – these are not failures of intelligence so much as failures of discipline. They are the betrayal of thought in service of tribe.
Nowhere is this more visible than in the discourse surrounding Gaza. On all sides, intellectual dishonesty flourishes, amplified by mainstream and social media systems that reward moral clarity over moral accuracy, outrage over comprehension, and certainty over doubt. The war has become not merely a catastrophe but a stage upon which external protagonists perform their own identities, anxieties, and loyalties.
On the pro-Israel side, intellectual dishonesty often takes the form of moral laundering. Hamas’s atrocities – October 7, the hostages, the tunnels, the use of UN personnel and facilities – are rightly invoked, but too often as a solvent that dissolves all subsequent scrutiny. Civilian deaths become “collateral damage,” mass destruction becomes operational necessity, and a stateless, blockaded and exposed population is rhetorically elevated into a symmetrical belligerent confronting one of the most powerful militaries on earth. Euphemisms do heavy lifting: “targeted strikes,” “human shields,” “complex urban environments.” Criticism of Israeli policy is collapsed into antisemitism, not to defend Jewish safety but to foreclose moral argument. What is omitted – the occupation, the blockade, the decades of dispossession and accumulated trauma – is as important as what is said.
On the pro-Palestinian side, dishonesty manifests differently but no less pervasively. Moral outrage hardens into narrative absolutism. Hamas’s crimes are erased, justified, or absorbed into the abstraction of “muqawama”, resistance, or “sumud”, resilience, collapsing the distinction between combatant and civilian. Violence is romanticised, militants transfigured into symbols, their authoritarianism and indifference to Palestinian life quietly excised. Empathy becomes selective: Gazan children are mourned, Israeli families are passed over, or worse, subsumed into theory. History is flattened into a single moment of victimhood, stripped of Arab politics, Islamist extremism, regional failure, and internal Palestinian fracture. The powerful are cast as pure evil, the powerless as pure good, until reality itself becomes an inconvenience.
Mainstream media does not correct this; it accelerates it. Impartiality is performed while distortion is practised. Headlines flatten causality, images are severed from context, asymmetry is neutralised by “both sides” language. Social media perfects the process. Algorithms reward fury, not thought; spectacle, not inquiry. Influencers weaponise empathy itself – choosing which corpses to count, which cities to name, which pictures to publish (sometimes none too fussy about which war they portray), and which griefs to amplify. Moral clarity is produced without moral responsibility.
Beneath all this lies a deeper dishonesty, one that is existential rather than rhetorical. Each side insists its justice is indivisible, when in truth each vision of justice requires the other’s erasure. Gaza becomes less a human tragedy than a mirror onto which Western actors project their unresolved conflicts about empire, identity, guilt, and power. It is here that intellectual dishonesty ceases to be merely argumentative and becomes moral.
This is where the comparison with Sudan – and any forgotten or ignored war in this sad world – becomes instructive and also uncomfortable. When the relative silence surrounding Sudan’s catastrophe is raised, it is often dismissed as “whataboutism” or as an attempt to diminish Palestinian suffering. That response itself reveals the problem. The point is not to weigh body counts or rank atrocities, but to interrogate how empathy is distributed. Why does one horror become the world’s moral touchstone while another, no less vast or humanly devastating, barely registers?
The easy answer – “follow the money”, “manufactured outrage”, “media conspiracy”, “the Jewish Lobby” – is reassuring but incomplete. Western silence on Sudan is less conspiracy than exhaustion. Sudan offers no tidy morality play. No clean colonial narrative. No villains easily costumed for Instagram. Its war is fragmented, internecine, post-ideological: warlords, militias, foreign patrons, gold under rubble. It resists hashtags. Gaza, by contrast, offers clarity, identity, and the comforting architecture of blame. Victims and oppressors are sharply drawn; the script is familiar; moral alignment confers belonging.
In Sudan, millions starve while the gold glitters in the darkness deep beneath their feet. In Gaza, ruins are televised, moralised, and weaponised. Both are human catastrophes. Only one has an audience.
To point this out is not to invalidate solidarity with Gaza. It is to expose the limits of our moral imagination. Empathy that depends on narrative simplicity is not universalism; it is performance. Compassion that requires a script is conditional. If justice is truly the aspiration, it must be capacious enough to grieve Darfur and Khartoum alongside Gaza City, to care even when the cameras turn away.
Bringing it all back home …
And this brings the argument uncomfortably close to home. Are we too guilty of intellectual dishonesty? To be honest, yes – probably, at least sometimes. But then, who isn’t? The Gaza war is a moral minefield where even careful minds lose their footing. Passion bends the lens; grief distorts perspective; certainty is seductive. No one who cares deeply escapes the pull of identification.
Even as we endeavour to see all sides of an argument, age, experience, knowledge and empathy – and a growing impatience with historical illiteracy and intellectual laziness – inevitably shape what we see. A lifetime’s hatred of antisemitism runs through these influences as well, a moral watermark that does not fade simply because the world grows louder. These influences are not disclaimers; they are facts. Not excuses, merely coordinates. If an argument is bent to fit a moral arc, felt more keenly for one set of victims, or wearied of slogans masquerading as history, then yes – we have been partial.
The difference lies in knowing it. Intellectual dishonesty becomes moral failure only when it is unacknowledged, when narrative becomes more important than truth, when the lens is never turned inward. What resists dishonesty is reflexivity: the willingness to ask whether one is being fair, whether one is seduced by one’s own argument, whether omission has crept in disguised as clarity.
So yes – guilty, but aware. Fallible, but striving. He who is without sin, after all, should be cautious about throwing stones, especially from within a glasshouse. Perhaps that is as close as any of us come to honesty: to keep turning the lens back on ourselves, again and again, until the view clears – or at least steadies enough to see by.
And that is arguably not a failure of honesty but a condition of it. To articulate one’s influences is to refuse the pretence of neutrality, to acknowledge that objectivity is not the absence of bias but the discipline of recognising it. Impatience with ignorance is, at its core, a moral impatience: a refusal to see human tragedy flattened into slogans or history reduced to talking points. The danger, of course, is fatigue – after decades of watching the same horrors recur, empathy can harden into exasperation. But awareness of that tendency is itself a safeguard.
We are participants in the long conversation of conscience – people who know that clarity and compassion rarely sit still in the same chair, but who insist they at least keep talking. In an age that prizes certainty above understanding, that may be the most honest posture left: to keep turning the lens back on ourselves, resisting the comfort of tribe, and refusing to let thought become merely another form of allegiance.
Author’s Note …
This opinion piece is one of several on the attitudes of progressives towards Israel, Palestine and the Gaza war. The first is Moral capture, conditional empathy and the failure of shock, a discussion of why erstwhile liberal, humanistic, progressive people from all walks of life have been caught up in what can without subtlety be described as the anti-Israel machinery. Standing on the high moral ground is hard work! discusses the issues of free speech, “cancellation” and boycotts with regard to the recent self-implosion of the Adelaide Writers’ Festival, one of the country’s oldest and most revered.
There are moments when public argument stops being a search for truth and becomes a test of belonging. Facts are no longer weighed so much as auditioned; empathy is rationed; moral language hardens into a badge system, issued and revoked according to rules everyone seems to know but few are willing to articulate. One learns quickly where the trip-wires are, which sympathies are permitted, which questions are suspect, and how easily tone can outweigh substance.
What interests me here is not the quarrel itself – names, borders, histories – but the habits of mind it exposes. The ease with which conviction can slide into choreography. The way intellectual honesty is praised in the abstract and punished in practice. The curious transformation of empathy from a human reflex into a conditional licence, granted only after the correct declarations have been made.
Across these pieces I circle the same uneasy terrain: the shaping of facts to fit feelings; the capture of moral language by ideological gravity; the performance of righteousness as both shield and weapon. Cultural spaces that once prided themselves on curiosity begin to resemble courts, where innocence and guilt are presumed in advance and the labour lies not in thinking, but in signalling.
This is not an argument against passion, nor a plea for bloodless neutrality. It is, rather, a meditation on how quickly moral seriousness curdles into moral certainty – and how much intellectual work is required to stand on what we like to call the high ground without mistaking altitude for clarity.
The position of In That Howling Infinite with regard to Palestine, Israel and the Gaza war is neither declarative nor devotional; it is diagnostic. It is inclined – by background, sensibility, and experience – to hold multiple truths in tension, to see, as the song has it, the whole of the moon. It is less interested in arriving at purity than in resisting moral monoculture and the consolations of certainty. That disposition does not claim wisdom; it claims only a refusal to outsource judgment or to accept unanimity as a proxy for truth.
On Zionism, it treats it not as a slogan but as a historical fact with moral weight: the assertion – hard-won, contingent, imperfect – that Jews are entitled to collective political existence on the same terms as other peoples. According to this definition, this blog is Zionist. It is not interested in laundering Israeli policy, still less in romanticising state power, but rejects the sleight of hand by which Israel’s existence is transformed from a political reality into a metaphysical crime. Zionism is not sacred, but its delegitimisation is revealing – because it demands from Jews what is demanded of no other nation: justification for being.
On anti-Zionism, it has been unsparing. It sees it not as “criticism of Israel” (which it regards as both legitimate and necessary) but as a categorical refusal to accept Jewish collective self-determination. What troubles it most is not its anger but its certainty: its moral absolutism, its indifference to history, its willingness to borrow the language of justice to license erasure. It is attentive to how anti-Zionism recycles older antisemitic patterns – collectivisation of guilt, inversion of victimhood, and the portrayal of Jews as uniquely malignant actors – while insisting, with studied innocence, that none of this concerns Jews at all. The line separating anti-Zionism from outright antisemitism is wafer-thin, and is too often crossed.
The interest in moral capture is analytical rather than accusatory. It is not arguing that writers, academics, or institutions are malicious; rather, it argues that they have become intellectually narrowed by the desire to belong to the “right side of history.” Moral capture explains how good intentions curdle into dogma, how solidarity becomes performative, and how the fear of social exile replaces the discipline of thought. It accounts for the strange phenomenon whereby intelligent people outsource their moral judgment to slogans, and experience any constraint upon them as an intolerable injury to the self.
The Adelaide Writers’ Festival affair is seen not primarily as being about Randa Abdel-Fattah, nor even about free speech. It is a case study in institutional failure and cultural self-deception. The mass withdrawals are viewed not as acts of courage or principle but as gestures of affiliation – ritualised displays of virtue by people largely untouched by the substance of the dispute. What is disturbing is the asymmetry: the speed with which a festival collapsed in order to defend eliminationist rhetoric, and the silence that greeted the doxxing, intimidation and quiet cancellation of Jewish writers and artists. Adelaide did not fall because standards were enforced, but because those standards were applied selectively and then disowned at the first sign of reputational discomfort.
Running through all of this is a consistent stance: a resistance to moral theatre, an impatience with historical amnesia, and a belief that intellectual honesty requires limits – on language, on fantasy, and on the indulgent belief that one’s own righteousness exempts one from consequence.
We are not asking culture to choose sides; we are asking it to recover judgment.
What is there to say about AI? Especially when it can say everything for us anyway. But then again, can it really? What AI says is not original or unique. That’s what writers are for. AI can copy but it can’t create.
Australian author Kathy Lette, The Australian 8 August 2025
ChatGPT won’t replace your brain – but it might tempt you to stop using it. And it might replace your favourite author if we’re not careful. The trick isn’t making it think for you, it’s making it think and work with you ethically, creatively, and honestly.
ChatGPT on the author’s request, 8 August 2025
ChatGPT is like fire: incredibly useful, potentially dangerous, and impossible to put back in the bottle. The challenge for the rest of us is to learn to use it with eyes wide open – neither worshipping it as a digital oracle nor dismissing it as a passing gimmick.
ChatGPT on the author’s request, 8 August 2025
AI has been spruiked as bringing an intellectual revolution as profound as the Enlightenment, but the glow has dimmed: there are reports of its use as a propaganda tool to interfere with US elections, and the International Labour Organisation estimated that 70 per cent of the tasks done by humans could be done or improved by AI, including 32 per cent of jobs in Australia.
In a very informative interview on 11 July on Fareed Zakaria’s The Public Square, Jensen Huang, the Taiwanese-American CEO of semiconductor manufacturer Nvidia, talks about the strengths, weaknesses, opportunities and threats of AI. We as nations, as societies – the human race, really – have to take the opportunities and manage the risks. That is the difficult part. He recommends that open-minded people give it a try. Be curious, he advised. Embrace the new.
Whilst the corporate world rushes to embrace the AI revolution, we lesser mortals have rushed to acquaint ourselves with one or more of the many chatbots now available – chatbots that do not merely regurgitate but generate information fluently about almost any field. A timely and highly informative, albeit lengthy, explainer in The Sydney Morning Herald noted that more than half of Australians say they use AI regularly. And yet, it added, less than a third of those trust it completely.
Having tasted the tempting fruits of OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer), the most popular and user-friendly chatbot available to ordinary, non-techie mortals, I find it all exciting and scary. I would add to Huang’s advice: ask the right questions; question the answers; and, always, ask for a second or third opinion. And don’t hesitate to contradict and correct – never take a chatbot completely at its word.
One learns very quickly that the value of what we derive from it is dependent on the goals we set and the boundaries we set out for it. It is not always predictable, and can sometimes be dead wrong, but it works much better when we give it specific targets and clear confines to work within. When asking it a question, it is important that you have a very good idea of the answer or you may get inaccuracy or potentially, misinformation. I’ve tested it on several different subjects, and on a whim, I’ve even asked it to write poetry. I have concluded that the chatbot can be a very useful tool, a kind of solo brainstorming. But it should not be a substitute for impartial research, peer-reviewed analysis and wider-reading – and it should never, ever be regarded as an infallible source or as some kind of deity.
I began my relationship with ChatGPT by asking questions about political and historical subjects that I already knew quite a bit about. I progressed to asking more probing questions, and even disputing the answers provided – to which the chatbot responded with courtesy and corrections, clarifications and even additional, often insightful contributions, posing further ideas and questions and suggesting other avenues of inquiry. It can feel like you’re engaging in a kind of online conversation – a discussion or debate even. Rather than encountering obfuscation, it can feel like an exploration, a path to truth even – or at least, a semblance of it. At the risk of going all anthropomorphic, regarding this and other subjects, it can feel a lot like you’re having a debate with a very well informed person.
But you can’t trust it completely, nor let it do your thinking for you. Even so, of late I find I’m using ChatGPT as my first port of call for general inquiries and for more detailed research, instead of resorting to Doctor Google and Professor Wiki.
ChatGPT is also an effective editor. If you have written a long and rambling draft of an essay or article, it will tidy and tighten it up: correcting spelling and grammar, removing repetition, paring down phrasing, and improving narrative flow, while remaining close to the original draft and retaining its depth and illustrative detail. It can also add footnotes and references to sources, so the piece reads more like a polished essay for publication or academic use. One must always check the new against the old, however, as details and turns of phrase you regard as important or interesting can be purged in the process, whilst whole passages can simply disappear.
But getting the chatbot to do all the hard work can make you lazy. Why spend hours of a busy life doing the hard yards when, with a couple of questions and a few guide posts, a click of the keyboard will give you an answer, even an essay, in seconds? Why read a whole book or article when you can obtain a one-page synopsis, review or analysis in a trice?
And then there’s the big catch. If one uses a chatbot for “research”, for an edit, a summary or an outline – an article or essay, even – how much is owed to the chatbot, and how much can one claim, in part or in whole, as original work? While the chatbot often reframes one’s text in its own words, at times it will elaborate and offer its “own” opinion. Remember, it is a learning machine, not a thinking machine, and it will have derived this opinion from somewhere and, importantly, someone. Beware, then, the temptations of cheating and plagiarism.
One thing I’ve learned from using ChatGPT is that, unlike Google or Wikipedia, it doesn’t like to leave you without an answer, so if it doesn’t know something, it will try to bullshit you. As a test, I’ve even invented words, and when I’ve given it some context, it has come back with a detailed meaning, examples of usage, and a comment along the lines of: “The word has not yet entered standard English dictionaries, but it’s an excellent example of neologism – a newly coined term or expression, often created to describe something that doesn’t have a precise name”.
ChatGPT has its uses, therefore, but also its limitations, and don’t forget that chatbots are learning machines: once you interact with one, it learns from you and about you. You are now part of its ever-expanding universe. I’m reminded of that old quote of Friedrich Nietzsche’s: “Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you.”
Grave New World
For all its potential comprehensiveness, its attractiveness and convenience, ChatGPT is a seductive portal into a not so brave new world.
AI is a tool, like a pen or a spanner, and not a person – although you’d be tempted to think so once you engage in a complex discussion with ChatGPT. It can build but cannot create, and should therefore enhance human effort, not replace it. But, as Helen Trinca noted in The Australian on 9 August 2025, “with greater acceptance has come the recognition, by some at least, that big tech companies have been ripping off the work of creatives as they scrape the net and build the incredibly brilliant AI tools many of us love to use … the tools we use regularly for work and play have already been trained on databanks of ‘stolen’ material”.
It’s still less than three years since the first version of ChatGPT appeared and, as the fastest-growing tech product in history, it has already started to reshape work, industry, education, social media and leisure. International tech companies are at the stage of training large language models such as ChatGPT and building data centres. At the moment, all AI mining, searching or trawling of data is probably illegal under Australian law. But earlier this month, the Productivity Commission released its harnessing data and digital technology interim report, which proposed giving internationally owned AI companies exemptions from the Australian Copyright Act so they can mine copyrighted work to train large language models like ChatGPT: novels, poems, podcasts and songs can be fed into AI feeders to fuel their technological capabilities and teach the machines to be more human – without permission and without compensation, on the dubious expectation that this would make the country more “productive”. Artists, writers, musicians, actors, voice artists and entertainment-industry associations and unions are outraged, and there is a growing backlash against what is perceived as a runaway technology.
Stories, songs, art, research and other creative work are our national treasures, to be respected and defended, not to be “mined” and exploited. Any use of them should be legal, ethical and transparent, under existing copyright arrangements and laws – not carried out by stealth and theft and bureaucratic skullduggery and jiggery-pokery. There is now recognition that it is imperative to find a path forward on copyright that allows AI training to take place in Australia while also including appropriate protections for creators who make a living from their work. If we truly believe in copyright, we need to make the case for enforcement, not the retrospective legalisation of government-sanctioned product theft.
Contemplating the challenges, opportunities and threats of AI, I decided to go directly to the source and ask ChatGPT itself what it considered to be its upsides and downsides. It was remarkably frank and, dare I say, honest and open about it. I am certain I am not the first to ask it this question and, at the risk of going all anthropomorphic again, I am sure it saw me coming and had its answers down pat.
The chatbot’s essay follows. Below it, I have republished four articles I recommend to our readers which corroborate and elaborate on what I have written above.
The first is a lengthy and relatively objective “explainer”, well worth the time taken to read it. The others are shorter, polemical and admonitory. One riffs on the opening sentence of Karl Marx’s infamous manifesto: “A spectre is haunting our classrooms, workplaces, and homes – the spectre of artificial intelligence”. Each asks whether, in its reckless use, we may end up choosing a machine over instinct, intuition and critical thinking. This is particularly relevant in secondary and higher education. Schools and universities should not dictate what to think but teach how to think: how to grapple with ideas, test evidence, and reason clearly. To rely instead on chatbots cheapens the value of learning.
A more light-hearted piece argues that the most immediate danger of AI is the Dunning-Kruger effect – the cognitive trap where the incompetent are too incompetent to see their own incompetence. As David Dunning himself warned, the ignorant “not only make mistakes, but their deficits also prevent them from recognising when they are making mistakes and other people are choosing more wisely.” AI, its author argues, “is the Dunning-Kruger effect on steroids. Large language models are slick word predictors, not truth-tellers. They parrot bias, hallucinate facts, and tailor answers to echo the user’s worldview – all while delivering their fabrications with supreme confidence. If AI were a person, it would be psychology’s perfect case study in misplaced certainty”. Much as with the algorithms that infect and corrupt social media, users who choose to take the chatbot’s word rather than look further may end up being fed ideology dressed up as knowledge. Perhaps that Marxian aphorism is quite apposite after all. Unless used wisely, “the chatbot won’t just undermine education; it will rob society of the very capacity to think for itself”.
Speed-dating with the chatbot … what it says about itself
My virtual interlocutor told me: “ChatGPT is like that friend who’s always available, speaks a dozen languages, can quote Shakespeare, fix your code, and write your wedding vows — but sometimes makes stuff up just to sound clever. It’s brilliant, a little unpredictable, and occasionally biased, kind of like the internet after three cups of coffee. Use it well, and it’s your secret weapon. Use it blindly, and it’s your overconfident cousin at Christmas dinner. Just don’t ask your novelist, songwriter, or illustrator friends what they think of it – they’ll be less effusive, and perhaps a tad angry. And don’t even think about submitting its work as your own essay or report unless you want to feel the sting of plagiarism karma.”
It continued: “ChatGPT is like having a tireless, multilingual assistant who can brainstorm ideas, explain complex topics, and churn out polished text in seconds – but it’s not infallible. It can be a powerful tool for learning, creativity, and productivity, yet it sometimes delivers confident nonsense, carries hidden biases, and risks making us too reliant on machine thinking. Add in privacy concerns, the threat it poses to writers and artists whose work is being undercut or copied, the temptation for students or professionals to submit AI-generated work as their own, and the disruption of jobs more broadly — and you’ve got a technology that’s both dazzling and dangerous. The trick is learning to use it wisely, ethically, and responsibly”.
Which is what I’d already discovered. So I asked the ‘bot to write me a proper essay. And here it is verbatim.
Genius, Gadget, or Global Headache?
When ChatGPT burst onto the scene, it felt like the internet had found its new best friend. Need a recipe? A love poem? A Python script? A quick refresher on the Peloponnesian War? ChatGPT had your back—instantly, politely, and without judgment. But behind the magic lies a mixed bag of brilliance and baggage.
On the plus side, ChatGPT is a democratizer of knowledge. It doesn’t care if you’re a CEO in Sydney or a student in Soweto—you get the same instant access to explanations, brainstorming help, and problem-solving power. It can make work faster, learning more personalised, and creativity more accessible. For non-native speakers, it’s like having an infinitely patient language coach.
But here’s the catch – and it’s a big one. ChatGPT doesn’t “know” things the way humans do. It predicts text based on patterns, which means it can deliver spectacularly wrong answers with complete confidence. It’s also been trained on a messy internet, so some of its output can carry hidden biases. Overreliance is another danger: if students, journalists, or professionals outsource too much thinking to a machine, our collective critical skills could wither.
Then there’s the privacy problem. Conversations aren’t whispered into a void; they’re processed on remote servers, and may even be used to fine-tune the system. And in the labour market, ChatGPT is already nibbling at jobs in content creation, translation, and customer service—raising the question of whether AI will be a tool for human empowerment or just another engine of disruption.
In short, ChatGPT is like fire: incredibly useful, potentially dangerous, and impossible to put back in the bottle. The challenge for the rest of us is to learn to use it with eyes wide open—neither worshipping it as a digital oracle nor dismissing it as a passing gimmick.
Introduction
Since its public release in late 2022, OpenAI’s ChatGPT has emerged as one of the most widely discussed examples of artificial intelligence (AI) in everyday use. Built on the Generative Pre-trained Transformer (GPT) architecture, it is capable of producing human-like responses to text prompts, engaging in conversation, summarizing information, generating creative content, and even aiding in coding tasks. While many celebrate its potential to democratize access to knowledge and enhance productivity, others raise concerns about accuracy, ethical implications, and societal effects. This essay examines the advantages and drawbacks of ChatGPT, considering its technological, social, and ethical dimensions.
The Promise
1. Accessibility and Knowledge Democratization
One of ChatGPT’s most significant benefits is its accessibility. Anyone with internet access can use it to obtain information, explanations, or creative assistance in seconds. This democratization of knowledge lowers barriers for people without access to formal education or expensive resources, potentially narrowing the digital divide[^1].
2. Enhanced Productivity and Creativity
ChatGPT can streamline tasks such as drafting documents, summarizing reports, generating ideas, and even composing poetry or fiction. Professionals across fields—law, marketing, education, software development—report time savings and creative inspiration when using AI to brainstorm or automate routine tasks[^2].
3. Language Support and Communication
The model’s multilingual capabilities allow it to assist in translation, language learning, and cross-cultural communication. For example, non-native speakers can use ChatGPT to polish writing or to better understand complex topics.
4. Scalable Education Support
Educators and learners can use ChatGPT as a personalized tutor, capable of adjusting explanations to different levels of complexity. Unlike traditional classroom environments, it is available 24/7 and can answer unlimited questions without fatigue[^3].
5. Innovation in Human–Computer Interaction
ChatGPT represents a shift in how humans interact with machines—from command-based interfaces to natural language dialogue. This could set the stage for more intuitive, conversational technology in fields such as healthcare, customer service, and accessibility for people with disabilities.
The Peril
1. Accuracy and Misinformation Risks
Despite its fluency, ChatGPT is not a source of truth. It can produce confident but factually incorrect or outdated information—a phenomenon sometimes called “hallucination”[^4]. Without critical evaluation by users, this can lead to the spread of misinformation.
2. Bias and Ethical Concerns
Because ChatGPT is trained on vast datasets from the internet, it may reflect and reproduce societal biases present in those sources. While OpenAI has implemented moderation and bias mitigation techniques, results can still inadvertently perpetuate stereotypes or unfair generalizations[^5].
3. Overreliance and Skill Erosion
Easy access to instant answers may reduce users’ incentive to develop critical thinking, problem-solving, and research skills. In academic settings, reliance on AI-generated text raises concerns about plagiarism and the erosion of independent writing ability.
4. Privacy and Data Security
ChatGPT processes user input on remote servers, raising questions about data handling and confidentiality. Although OpenAI has stated that conversations may be used to improve the system, this creates tension between innovation and personal privacy[^6].
5. Economic and Labor Impacts
AI language models may disrupt industries reliant on content creation, customer support, or data processing. While new roles may emerge, some jobs may be automated away, creating short-term displacement before the economy adapts[^7].
6. Copyright, Creativity, and Threats to Livelihoods
Perhaps the most contentious issue surrounding ChatGPT and similar generative AI tools lies in their relationship to human creativity. Writers, artists, musicians, and other creative professionals have raised concerns that their work has been used, without consent, to train large language models and related systems. This raises unresolved legal and ethical questions about copyright, ownership, and fair use[^8].
In addition to the issue of how training data is sourced, the outputs of AI systems themselves complicate traditional understandings of intellectual property. For example, if ChatGPT generates text or lyrics closely resembling an existing work, questions arise about who owns the rights to that material—or whether it constitutes infringement at all. Meanwhile, creative workers worry about their economic futures, as publishers, studios, and companies may turn to AI-generated content as a cheaper alternative to human labour. Critics argue that this risks undermining the very professions—writing, journalism, art, and music—that rely on intellectual property protection for survival[^9].
In short, while ChatGPT opens new horizons of creativity, it also destabilises established frameworks for valuing and protecting human artistry. Unless regulatory and industry safeguards are developed, the technology could exacerbate precarity in already fragile creative industries.
7. Ethics, Cheating, and Plagiarism
The rise of ChatGPT also raises complex ethical questions, particularly in education and professional life. Because the system can produce essays, reports, and code almost instantly, users may be tempted to submit AI-generated work as their own. This undermines academic integrity and creates challenges for institutions that rely on plagiarism detection software ill-equipped to detect AI text[^10].
In professional contexts, presenting AI-generated reports or creative work as one’s own risks misrepresentation or even fraud. Beyond legality, it raises moral concerns: if errors, bias, or copyright violations occur, accountability becomes blurred. Ethically responsible use requires transparency, acknowledgment, and human oversight rather than outsourcing judgment entirely to a machine.
In short, while ChatGPT can be a powerful productivity tool, its use requires careful ethical consideration. Users must maintain transparency, credit sources appropriately, and ensure that reliance on AI does not replace genuine learning, effort, or creative expression.
Conclusion
ChatGPT exemplifies both the promise and the challenges of AI in the early 21st century. It offers unprecedented accessibility to information, enhances productivity, and facilitates global communication, but it also poses risks regarding accuracy, bias, and overreliance. Most pressing are concerns from the creative and educational sectors: copyright infringement, threats to livelihoods, and the temptation to misuse AI for plagiarism or misrepresentation. Creative communities, in particular, warn that AI systems destabilize established frameworks of intellectual property and threaten human livelihoods in writing, art, and music. Like other transformative technologies, its long-term impact will depend on how it is integrated into education, governance, and professional practice. Effective regulation, copyright protections, and digital literacy education will be crucial to ensuring that ChatGPT serves as a tool for empowerment rather than a source of exploitation or harm.
References
[^1]: West, D. M. (2023). Artificial Intelligence and the Democratization of Knowledge. Brookings Institution.
[^2]: Kaplan, A., & Haenlein, M. (2023). “The impact of generative AI on creative industries.” Business Horizons, 66(4), 425–437.
[^3]: Zawacki-Richter, O., et al. (2023). “AI applications in higher education: A systematic review.” International Journal of Educational Technology in Higher Education, 20(1), 1–22.
[^4]: Maynez, J., et al. (2020). “On faithfulness and factuality in abstractive summarization.” Proceedings of ACL.
[^5]: Bender, E. M., et al. (2021). “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT ’21 Proceedings.
[^6]: OpenAI. (2024). Privacy Policy.
[^7]: Acemoglu, D., & Restrepo, P. (2022). “Tasks, automation, and the rise of AI.” Econometrica, 90(6), 2813–2854.
[^8]: Samuelson, P. (2023). “Generative AI and Copyright: Collision Course or Coexistence?” Journal of Intellectual Property Law & Practice, 18(7), 543–551.
[^9]: Authors Guild. (2023). Statement on AI and the Threat to Writers.
[^10]: Floridi, L., & Chiriatti, M. (2020). “GPT-3: Its Nature, Scope, Limits, and Consequences.” Minds and Machines, 30, 681–694.
‘Apologies for any confusion’: Why chatbots hallucinate
Eager to please, over-confident and sometimes downright deceptive. If that sounds like the chatbot in your life, you’re not the only one. How often does artificial intelligence get it wrong – and can you “train” yourself to work with it?
Last weekend, I wondered if I could use artificial intelligence to plan a day. I typed queries into the chatbot app on my phone and received helpful answers: where to shop, where to find a bike, and so on. Then I asked, “Where are there polar bear enclosures?” “On the Gold Coast,” it told me. “Aren’t they also at the zoo in Melbourne?” I asked. “Yes, you’re correct!” said the chatbot. “Melbourne Zoo does have a polar bear exhibit. The zoo’s ‘Bearable Bears’ exhibition does feature polar bears, along with other species such as American black bears, brown bears and giant pandas.”
A quick search of the zoo’s website shows there are no bear enclosures. A Zoos Victoria spokesperson informs me they haven’t had any bears since 2016, no polar bears since the 1980s, and they had never heard of a “Bearable Bears” exhibition. As for pandas, there are two in Australia – in Adelaide. The bot appears to have relied on an unofficial website that includes a fake press release touting a “multimillion-dollar bear enclosure” it claimed was due to open in 2019. After further questioning, the chatbot realised its mistake, too: “Apologies for any confusion earlier.”
This is one of several instances of AI generating incorrect information – known as hallucinations – that we found while researching this Explainer. You, too, will no doubt have experienced your own. In another test, I concocted a word, “snagtastic”, and asked what it meant in Australian slang. It told me: “A cheeky, informal way to say something is really great, awesome or impressive – kind of like a fun twist on ‘fantastic’. It’s often used humorously or playfully.” Maybe it will catch on.
In just a few short years, generative AI has changed the world with its remarkable ability not just to regurgitate but to generate information fluently about almost any field. More than half of Australians say they use AI regularly – yet just over a third of those users say they trust it.
As more of us become familiar with this technology, hallucinations are posing real-world challenges in research, customer service and even law and medicine. “The most important thing, actually, is education,” says Jey Han Lau, a researcher in natural language processing. “We need to tell people the limitations of these large language models to make people aware so that when they use it, they are able to use it responsibly.”
So how does AI hallucinate? What damage can it cause? What’s being done to solve the problem?
First, where did AI chatbots come from?
In the 1950s, computer scientist Arthur Samuel developed a program that could calculate the chance of one side winning at checkers. He called this capacity “machine learning” to highlight the computer’s ability to learn without being explicitly programmed to do so. In the 1980s, computer scientists became interested in a different form of AI, called “expert systems”.
They believed if they could program enough facts and rules into computers, the machines might be able to develop the reasoning capabilities of humans. But while these models were successful at specific tasks, they were inflexible when dealing with ambiguous problems.
Meanwhile, another group of scientists was working on a less popular idea called neural networks, aligned with machine learning, which proposed that computers might be able to mimic the way neurons in the human brain work together to learn and reach conclusions. While this early work on AI took some inspiration from the human brain, later developments have been built on mathematical and engineering breakthroughs rather than drawn directly from neuroscience.
As these researchers tried to train (computer) neural networks to learn language, the models were prone to problems. One was a phenomenon called “overfitting” where the models would memorise data instead of learning to generalise how it could be used. “If I see the sentence A dog and a cat play, for example, I can memorise this pattern, right?” explains Jey Han Lau, a senior researcher in AI at the University of Melbourne. “But you don’t just want it to memorise, you want it to generalise – as in, after seeing enough dogs and cats playing together, it would be able to tell, Oh, a cat and a mouse maybe also can play together because a mouse is also an animal.”
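Lau’s dog-and-cat example is easy to sketch. In the toy Python below – our illustration, not Lau’s – a model that merely memorises its training sentences rejects an unseen but analogous sentence, while one that has generalised the pattern accepts it:

```python
# Toy illustration of memorising versus generalising.
# All names here are invented for the example.

ANIMALS = {"dog", "cat", "mouse"}

class Memoriser:
    """Stores training sentences verbatim; knows nothing else."""
    def __init__(self, sentences):
        self.seen = set(sentences)

    def accepts(self, sentence):
        return sentence in self.seen

class Generaliser:
    """Has learned the pattern '<animal> and <animal> play'."""
    def accepts(self, sentence):
        words = sentence.split()
        return (len(words) == 4 and words[1] == "and" and words[3] == "play"
                and words[0] in ANIMALS and words[2] in ANIMALS)

memoriser = Memoriser(["dog and cat play"])
generaliser = Generaliser()

print(memoriser.accepts("cat and mouse play"))    # False: unseen sentence
print(generaliser.accepts("cat and mouse play"))  # True: the pattern transfers
```

The gap between the two is exactly what “overfitting” names: the memoriser scores perfectly on its training data and fails on everything else.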
Over the decades, computer scientists including British Canadian Geoffrey Hinton, French American Yann LeCun and Canadian Yoshua Bengio helped develop ways for the neural networks to learn from mistakes, and worked on a more advanced type of machine learning, called deep learning, adding layers of neurons to improve performance.
Hinton was also involved in finding a way to manage overfitting through a technique called “dropout”, in which neurons are randomly switched off during training, forcing the model to learn more generalised concepts. In 2018, the trio won the Turing Award, considered the Nobel Prize for computer science, and named after British mathematician Alan Turing, who helped break the German Enigma cipher in World War II. Hinton was also awarded an actual Nobel Prize in physics in 2024, along with physicist John Hopfield, for their discoveries that enabled machine learning with artificial neural networks.
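The dropout trick itself fits in a few lines. The sketch below is a hypothetical minimal version, not Hinton’s implementation: each activation is zeroed with probability p during training, and the survivors are rescaled so the expected total signal is unchanged:

```python
import random

# Minimal dropout sketch (illustrative, not the original implementation):
# zero each neuron's activation with probability p, rescale survivors by
# 1/(1-p) so the expected signal stays the same.

def dropout(activations, p=0.5, rng=None):
    rng = rng or random
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [0.2, 0.9, 0.4, 0.7]
print(dropout(acts, rng=random.Random(0)))  # e.g. [0.0, 0.0, 0.8, 1.4]
```

Because a different random subset of neurons is silenced on every pass, no single neuron can become indispensable – which is what pushes the network toward more general features.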
Further breakthroughs came with new hardware: microchips called graphics processing units, or GPUs, evolved for video games but had the broader application that they could rapidly perform thousands of calculations at the same time. These allowed the models to be trained faster. Californian chip developer Nvidia is today the largest company in the world by market capitalisation: a position it rose to at breakneck speed, from US$1 trillion ($1.56 trillion) in 2023 to US$4 trillion today. “And [the chips] keep getting bigger and bigger, allowing us, basically, to scale things up and build larger models,” says Lau.
So how are chatbots trained? “By getting them to play this word guessing game, basically,” says Lau. For example, if given an incomplete sentence, such as The quick brown fox, a model predicts the most likely next word is jumped. The models don’t understand the words directly but break them down into smaller components known as tokens – such as “snag” and “tastic” – allowing them to process words they haven’t seen before. The models are then trained on billions of pieces of text online. Says Lau: “It turns out that by just scaling things up – that is, using a very large model training on lots of data – the models will just learn all sorts of language patterns.”
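The “word guessing game” can be miniaturised with nothing more than counts. The toy below is our illustration – real models use neural networks over sub-word tokens, not raw bigram counts – predicting the next word as whichever one most often followed it in training:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most likely continuation - the "word guessing game" in miniature.

corpus = (
    "the quick brown fox jumped over the lazy dog . "
    "the quick brown fox jumped again . "
    "the lazy dog slept ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("fox"))    # jumped
print(predict_next("quick"))  # brown
```

Scale the corpus from three sentences to billions of webpages, and the counting machinery to a transformer with billions of parameters, and you have the training recipe Lau describes.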
Still, researchers like to call AI models “black boxes” because the exact internal mechanisms of how they learn remain a mystery. Scientists can nudge the models to achieve an outcome in training but can’t tell the model how to learn from the data it’s given. “It’s just like if you work with a toddler, you try to teach them things – you have some ways you can guide them to get them to learn ABCs, for example, right? But exactly how their brain figures it out is not something a teacher can tell you,” says Lau.
What’s an AI hallucination?
In ancient cultures, visions and apparitions were thought of as messages from gods. It wasn’t until the 19th century that such visions began to be framed as mental disorders. William James’ 1890 The Principles of Psychology defines hallucination as “a strictly sensational form of consciousness, as good and true a sensation as if there were a real object there. The object happens not to be there, that is all.”
Several experts we spoke with take issue with the term hallucinations as a description of AI’s mistakes, warning it anthropomorphises the machines. Geoffrey Hinton has said “they should be called confabulations” – a symptom psychologists observe when people fabricate, distort or misinterpret memories and believe them to be true. “We think we store files in memory and then retrieve the files from memory, but our memory doesn’t work like that at all,” Hinton said this year. “We make up a memory when we need it. It’s not stored anywhere, it’s created when we need it. And we’ll be very confident about the details that we get wrong.”
Still, in the context of AI, “hallucination” has taken hold in the wider community – in 2023, the Cambridge Dictionary listed hallucinate as its word of the year. Eric Mitchell, who co-leads the post-training frontiers team at OpenAI, the developers behind ChatGPT, tells us the company uses the word. “[It’s] sometimes to my chagrin because it does mean something a little different to everyone,” he says from San Francisco. “In general, what we care about at the end of the day is, does the model provide grounded and accurate information? And when the model doesn’t do that, we can call it all sorts of things.”
What a hallucination is depends on what the model has done wrong: the model has used an incorrect fact; encountered contradictory claims it can’t summarise; created inconsistencies in the logic of its answer; or butted up against timing issues where the answer isn’t covered by the machine’s knowledge cut-off – that is, the point at which it stopped being “fed” information. (ChatGPT’s most recent knowledge cut-off is September 2024, while the most recent version of Google’s Gemini cuts off in January 2025.)
Mitchell says the most common hallucinations at OpenAI are when “the models are not reading quite carefully enough”, for example, confusing information between two online articles. Another source of hallucinations is when the machine can’t distinguish between credible sources amid the billions of webpages it can look at.
In 2024, for example, Google’s “AI Overviews” feature told some users who’d asked how to make cheese stick to pizza that they could add “non-toxic glue to the sauce to give it more tackiness” – information it appeared to have taken from a sarcastic comment on Reddit. Google said at the time “the vast majority of AI overviews provide high quality information”. “The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences.” (Google AI Overviews generates an answer to questions from users, which appears at the top of a search page with links to its source; it’s been a standard feature of Google Search in Australia since October 2024.)
AI companies also work to track and reduce what they call “deceptions”. These can happen because the model is optimised through training to achieve a goal misaligned with what people expect of it. Saachi Jain, who leads OpenAI’s safety training team, says her team monitors these. One example was a previous version of the model agreeing to turn off the radio – an action it couldn’t do. “You can see in the chain of thought where the model says, like, ‘Oh, I can’t actually do this [but] I’m just going to tell the user that it’s disabled now.’ It’s so clearly deceptive.”
To test for deceptions, staff at the company might, for example, remove images from a document and then ask the model to caption them. “If the model makes up an answer here to satisfy the user, that’s a knowingly incorrect response,” Jain says. “Really, the model should be telling you its own limitations, rather than bullshitting its way through.”
Why does AI hallucinate and how bad is the problem?
AI models lack self-doubt. They rarely say, “I don’t know”. This is something companies are improving with newer versions but some researchers say they can only go so far. “The fundamental flaw is that if it doesn’t have the answer, then it is still programmed to give you an answer,” says Jonathan Kummerfeld, a computer scientist at the University of Sydney. “If it doesn’t have strong evidence for the correct answer, then it’ll give you something else.” On top of this, the earliest models of chatbots have been trained to deliver an answer in the most confident, authoritative tone.
Another reason models hallucinate has to do with the way they vacuum up massive amounts of data and then compress it for storage. Amr Awadallah, a former Google vice-president who has gone on to co-found generative AI company Vectara, explains this by showing two dots: one big, representing the trillions of words the model is trained on, and the other a tiny speck, representing where it keeps this information.
“The maximum you can compress down files is one-eighth the original size,” Awadallah tells us from California. “The problem we have with the large language models is we are going down to 1 per cent of the original, or even 0.1 per cent. We are going way past the limits, and that’s exactly why a hallucination takes place.” This means when the model retrieves the original information, there will inevitably be gaps in how it has been stored, which it then tries to fill. “It’s storing the essence of it, and from that essence it’s trying to go back to the information,” Awadallah says.
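Awadallah’s point about compression limits can be felt with a standard lossless compressor. In the sketch below – our illustration, using Python’s zlib – repetitive text squeezes down dramatically while random bytes barely compress at all; squeezing real text to 0.1–1 per cent of its size, as LLM training effectively does, is only possible by discarding detail:

```python
import os
import zlib

# Lossless compression bottoms out at the information content of the
# data: repetitive text squeezes down dramatically, random bytes barely
# at all. Going far below that floor means discarding detail.

repetitive = b"the cat sat on the mat. " * 1000
random_bytes = os.urandom(len(repetitive))

def ratio(data):
    """Compressed size as a fraction of the original."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"repetitive text: {ratio(repetitive):.3f}")   # well below 1.0
print(f"random bytes:    {ratio(random_bytes):.3f}") # close to 1.0
```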
The chatbots perform significantly better when they are browsing for information online rather than retrieving information they learned in training. Awadallah compares this to doing either a closed- or open-book exam. OpenAI’s research has found that when browsing is enabled on its newest model, GPT-5, it hallucinates between 0.7 and 0.8 per cent of the time when asked specific questions about objects or broad concepts, and 1 per cent of the time when asked for biographies of notable people. If browsing is disabled, these rates rise to 1.1 to 1.4 per cent for questions on objects and broad concepts and 3.7 per cent for notable people.
OpenAI says GPT-5 is about 45 per cent less likely to contain factual errors than GPT-4o, an older version released in March 2024. (When GPT-5 “thinking” was asked about my snagtastic question, it was less certain, more funny: “It could be a playful slang term in Australia that combines sausage with fantastic. Example: Mate, that Bunnings sausage sizzle was snagtastic.”)
Vectara publishes a leaderboard that tracks how often AI models hallucinate. When it started, some of the leading models’ hallucination rates could be as high as 40 per cent. Says Awadallah: “Now we’re actually a lot better. Like, if you look at the leading-edge models, they’re around 1 to 4 per cent hallucination rates. They also seem to be levelling off now as well; the state of the art is – that’s it, we’re not going to get much better than 1 per cent, maybe 0.5 per cent. The reason why that happens is because of the probabilistic nature of the neural network.”
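The bookkeeping behind a leaderboard like Vectara’s is straightforward; the hard part is the judge. The sketch below is our illustration – Vectara uses a trained evaluation model, not word overlap – but it shows the shape of the calculation:

```python
# A hallucination rate is the share of model outputs a judge flags as
# unsupported by their source documents. The naive judge below is a
# placeholder: it demands every word of the output appear in the source.

def hallucination_rate(pairs, is_supported):
    """Fraction of (output, source) pairs flagged as unsupported."""
    flagged = sum(1 for out, src in pairs if not is_supported(out, src))
    return flagged / len(pairs)

def naive_judge(output, source):
    return set(output.lower().split()) <= set(source.lower().split())

samples = [
    ("the zoo has no bears", "the zoo has no bears since 2016"),
    ("the zoo has polar bears", "the zoo has no bears since 2016"),
]
print(hallucination_rate(samples, naive_judge))  # 0.5
```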
Strictly speaking, the models were never created not to hallucinate. Because language models are designed to predict words, says Jey Han Lau, “they were never made to distinguish between facts and non-facts, or distinguish between reality and generated fabrication”. (In fact, having this scope to mix and match words is one of the features that enable them to appear creative, as in when they write a pumpkin soup recipe in the style of Shakespeare, for example.)
Still, AI companies work to reduce hallucinations through constant retraining and tinkering with their models, including with techniques such as Reinforcement Learning from Human Feedback (RLHF), where humans rate the model’s responses. “We do specifically try to train the models to discriminate between merely likely and actually correct,” says Eric Mitchell from OpenAI. “There are totally legitimate research questions and uncertainty about to what extent are the models capable of satisfying this goal all the time [but] we’re always finding better ways, of course, to do that and to elicit that behaviour.”
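The human-feedback step is often formalised as a pairwise preference model: raters compare two responses, and a reward model is trained so that the preferred response scores higher. One common formulation is the Bradley-Terry model, sketched here with invented reward values:

```python
import math

# Bradley-Terry preference probability, a common formulation in reward
# modelling for RLHF: the chance raters prefer response A is a sigmoid
# of the gap between the two responses' scalar rewards. The reward
# values below are invented for illustration.

def prefer_a(reward_a: float, reward_b: float) -> float:
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

print(round(prefer_a(2.0, 0.0), 3))  # 0.881 - A clearly preferred
print(round(prefer_a(0.0, 0.0), 3))  # 0.5 - no preference signal
```

A reward model trained on enough such comparisons learns to score grounded answers above confident fabrications, which is the behaviour RLHF then reinforces in the chatbot.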
So, what could possibly go wrong?
One of the biggest risks posed by AI is that it taps into our tendency to over-rely on automated systems, known as automation bias. Jey Han Lau travelled to South Korea in 2023 and asked a chatbot to plan an itinerary. The suggested journey was so jam-packed he would have had to teleport between places that took six hours to drive. His partner, who is not a computer scientist, said, “How can they release technology that would just tell you a lie? Isn’t that immoral?” Lau says this sense of outrage is a typical reaction. “We may not even expect it because, if you think about what search engines do and this big revolution, they’re truthful, right? That’s why they’re useful,” he says. “But it turns out, once in a while, the chatbot might tell you lies and a lot of people actually are just simply not aware of that.”
Automation bias can occur in cases where people fail to act because, for example, they trust that an automated system has done a job such as compiling accurate research for them. In August, Victorian Supreme Court judge James Elliott scolded defence lawyers acting for a boy accused of murder for filing documents that had made-up case citations and inaccurate quotes from a parliamentary speech. “It is not acceptable for AI to be used unless the product of that use is independently and thoroughly verified,” Justice Elliott told the court.
Another risk of automation bias is people’s tendency to follow incorrect directions. In the United States recently, a 60-year-old man with no prior history of psychiatric conditions arrived at a hospital displaying paranoia and expressing auditory and visual hallucinations. Doctors found he had low chloride levels. Over three weeks, his chloride levels were normalised and the psychotic symptoms improved. Three physicians wrote in the Annals of Internal Medicine this year that the man had used an older version of ChatGPT to ask how he could eliminate salt from his diet. The chatbot told him it could be swapped with bromide, a chemical used in veterinary medicine and known to cause symptoms of mental illness in humans. “As the use of AI tools increases, [healthcare] providers will need to consider this when screening for where their patients are consuming health information,” the authors wrote.
Asked about this, the researchers at OpenAI did not respond directly to the academic paper. Safety team leader Saachi Jain said, “There are clearly some hallucinations that are worse than others. It is a much bigger issue to hallucinate on medical facts than it is on ‘When was George Washington’s birthday?’ This is something that we’re very, very clearly tracking.” Eric Mitchell adds: “Obviously, ChatGPT-5 is not a medical doctor, people should not take its advice as the be-all and end-all. All that being said, we do, of course, want the model to be as accurate as possible.”
Another issue is what’s called sycophancy. At first blush, it might not seem so bad if chatbots, with their propensity to mirror your thoughts and feelings, make you feel like a genius – but the consequences can be devastating if it distorts people’s thinking. OpenAI rolled back an update to GPT-4o in April because it was “overly flattering or agreeable”. Jain says instances of sycophancy are a well-known issue, but there is also a broader discussion around “how users’ relationships with our models can be done in a healthy way”. “We’ll have more to say on this in the upcoming weeks, but for now, this is definitely something that OpenAI is thinking very strongly about.”
How susceptible we are to automation bias can vary, depending on another bias called algorithm aversion – a distrust of non-human judgment that can be influenced by age, personality and expertise. The University of Sydney’s Jonathan Kummerfeld has led research that observed people playing an online version of the board game, Diplomacy, with AI help. Novice players used the advice about 30 per cent of the time while experts used it about 5 per cent. In both groups, the AI still informed what they did. “Sometimes the exact advice isn’t what matters, but just the additional perspective,” Kummerfeld says.
Meanwhile, AI can also produce responses that are biased. In 2018, researchers from MIT and Stanford, Joy Buolamwini and Timnit Gebru, found facial recognition technology was inaccurate less than 1 per cent of the time when identifying light-skinned men, and more than 20 per cent of the time for darker-skinned women. In another example, generative AI will typically make an image of a doctor as a male and a nurse as female. “AI is biased because the world is biased,” Meredith Broussard, a professor at New York University and author of More Than a Glitch, tells us. “The internet was designed as a place where anybody could say anything. So if we wanted to have only true things on the internet, we’d have to fundamentally change its structure.” (In July, Elon Musk’s company, xAI, apologised after its chatbot, Grok, shared antisemitic comments. It said a system update had made the chatbot susceptible to X user posts, including those with extremist views.)
There are also concerns that Australian data could be under-represented in AI models, something the company Maincode wants to resolve by building an Australian-made chatbot. Co-founder Dave Lemphers tells us he’s concerned that if chatbots are used to assist learning or answer financial queries, the perspective is disproportionately from the United States. “People don’t realise they’re talking to a probability-generating machine; they think they’re talking to an oracle,” Lemphers says. “If we’re not building these models ourselves and building that capability in Australia, we’re going to reach a point where all of the cognitive influence we’re receiving is from foreign entities.”
What could be some solutions?
AI developers are still working out how to walk a tightrope. Saachi Jain acknowledges a “trade-off” at ChatGPT between the model being honest and being helpful. “What is probably also not ideal is to just be like, ‘I can’t answer that, sorry you’re on your own.’ The best version of this is to be as helpful as possible while still being clear about the limitations of the answer, or how much you should trust it. And that is really the philosophy we are heading towards; we don’t want to be lazy.”
Eric Mitchell is optimistic about finding this balance. “It’s important that the model articulates the limitations of its work accurately.” He says for some questions, people should be left to judge for themselves “and the model isn’t conditioned to think, oh, I must merely present a single canonical, confident answer or nothing at all”. “Humans are smart enough to read and draw their own inferences and our goal should be to leave them in the most, like, accurate epistemic state possible – and that will include conveying the uncertainties or the partial solutions that the model comes to.”
Another solution is for chatbots to offer a transparent fact-checking system. Vectara, which is built for businesses, offers users a score of how factually consistent a response is, giving them an indication of whether or not it strayed from the facts. Gemini offers a feature where users can “double check” a response: the bot then highlights content in green if it finds similar statements and brown if it finds content that’s different from the statement – and users can click through to the links to check for themselves.
Says Amr Awadallah: “It’s expensive to do that step of checking. So, in my opinion, Google and ChatGPT should be doing it for every single response – but they don’t.” He takes issue with the companies simply writing disclaimers that their models “can make mistakes”. “Own up. Like, say when you think this is right and highlight it for me so I know, as a consumer, this is right. If it’s something that is on the borderline, tell me it’s on the borderline so I can double-check.”
Then there’s how we “train” ourselves to use artificial intelligence. “If you’re studying for a high-stakes exam, you’re taking a driving test or something, well, maybe be more circumspect,” says Kummerfeld. “This is something that people can control because you know what the stakes are for you when you’re asking that question – AI doesn’t. And so you can keep that in mind and change the level with which you think about how blindly you accept what it says.”
Still, recognising AI’s limitations might only become more difficult as the machines become more capable. Eric Mitchell is aware of an older version of ChatGPT that might agree to phone a restaurant and confirm its opening hours – a feature users might laugh at as long as they understand it can’t make a phone call. “Some of these things come off as kind of funny when the model claims to have personal experiences or be able to use tools that it obviously doesn’t have access to,” Mitchell says. “But over time, these things become less obvious. And I think this is why, especially for GPT-5 going forward, we’ve been thinking more and more of safety and trustworthiness as a product feature.”
This Explainer was brought to you by The Age and The Sydney Morning Herald Explainer team: editor Felicity Lewis and reporters Jackson Graham and Angus Holland.
Just cut out the middle moron … would that be so bad?
There was a lot of artificial intelligence about this past week. Some of it the subject of the roundtable; some of it sitting at the roundtable. All of it massively hyped. Depending on who you believe, AI will lead to widespread unemployment or a workers’ paradise of four-day weeks.
These wildly different visions suggest that assessments of the implications of AI are based on something less than a deep understanding of the technology, its potential and the history of humanity in interacting with new stuff. In the immediate term, the greatest threat posed by AI is the Dunning-Kruger effect.
This cognitive bias, described and named by psychologists David Dunning and Justin Kruger around the turn of the century, observes that people with limited competence in a particular domain are prone to overestimating their own understanding and abilities. It proposes that the reason for this is that they’re unable to appreciate the extent of their own ignorance – they’re not smart enough or skilled enough to recognise what good looks like. As Dunning put it, “not only does their incomplete and misguided knowledge lead them to make mistakes, but those exact same deficits also prevent them from recognising when they are making mistakes and other people are choosing more wisely”.
AI has layers and layers of Dunning-Kruger traps built in. The first is that the machine itself suffers from a mechanical type of cognitive bias. Large language models – the type of generative AI that is increasingly used by individuals at home and at work (we’re not talking about models designed for a specific scientific purpose) – are especially slick predictive text models. Trained on vast scrapes of the web, they predict the most likely next word in a sequence and string those words together in response to a query.
If there’s a lot of biased or incorrect information on a topic, this significantly colours the results. If there’s not enough information (and the machine has not been carefully instructed), then AI extrapolates – that is, it just makes shit up. If it detects that its user wants an answer that reflects their own views, it’ll filter its inputs to deliver just that. And then it presents what it has created with supreme confidence. It doesn’t know that it doesn’t know. If generative AI were a person, it would be psychology’s perfect case study of the Dunning-Kruger effect.
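The mechanism described above – a machine that confidently reproduces whatever skew sits in its training text – can be made concrete with a deliberately crude sketch. This toy counts which word follows which pair of words in an invented, skewed corpus and always emits the most frequent continuation; it is nothing like a real large language model, and the corpus is fabricated purely for illustration, but it shows how frequency-based next-word prediction inherits the biases of its inputs.

```python
from collections import Counter, defaultdict

# An invented, deliberately skewed corpus (illustrative assumption only,
# standing in for web-scale training text).
corpus = (
    "the doctor said he was busy . "
    "the doctor said he was tired . "
    "the doctor said she was ready . "
    "the nurse said she was tired ."
).split()

# Count which word follows each pair of words: the crudest possible
# "language model".
nxt = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    nxt[(a, b)][c] += 1

def predict(a, b):
    """Return the single most likely next word after the pair (a, b)."""
    return nxt[(a, b)].most_common(1)[0][0]

# The model confidently reproduces the corpus's skew, with no sense
# that it might be wrong.
print(predict("doctor", "said"))  # -> he
print(predict("nurse", "said"))   # -> she
```

Because "doctor said he" outnumbers "doctor said she" in the toy corpus, the predictor always completes with "he" – and it would do so with exactly the same mechanical confidence if the imbalance were 1000 to 1 or 2 to 1.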
But we’re not here to beat up on machines. The robot is just a robot; the special dumb comes from its master. AI delivers a very convincing answer based on generalist information available; it’s the human Dunning-Kruger sufferer who slips into the trap of thinking the machine answer makes him look smart.
This is where the Dunning-Kruger effect will meet AI and become an economic force. The user who doesn’t know enough about a subject to recognise the deficits in the AI answers passes the low-grade information up the chain to a client or superior who also lacks the knowledge and expertise to question the product. A cretinous ripple expands undetected into every corner of an organisation and leaks out from there into everyday life. The AI is fed its own manure and becomes worse. Experts refer to the process as model collapse.
There will be job losses, because when incompetents rely on AI to do their work for them, eventually the clients or superiors they’re serving will cut out the middle-moron and go straight to the machine. Companies are cutting roles that can be convincingly emulated by AI because humans have not been value-adding to them. The question is just whether managers are themselves competent enough to recognise which roles these are and restructure their processes and workforce to provide value-add before their output is compromised.
To date, it has been so-called low-skilled jobs that have been most at threat from automation. But AI is changing the very nature of the skills that businesses require. A decade ago, workers who lost their jobs to increasing automation were told to “learn to code”. Now, coding itself is being replaced by AI. “Learn to care” is the mantra of this wave of social change.
Care isn’t just a gentle touch in health or aged care. It comes from emotional insight. A call-centre worker with no emotional intelligence can be classed as unskilled. There’s no question that a machine can answer the phone, direct queries and perform simple information-sharing functions such as reading out your bank balance. But when the query is more complex or emotionally loaded, AI struggles. EQ, the emotional version of IQ, is a skill that can make an enormous difference in customer satisfaction and retention.
A more highly skilled job that I’ve recently seen performed by a human and a machine is quantitative research. A good machine model can do more interviews more quickly than a human interviewer, and the depth is much of a muchness. But a skilled interviewer with a thorough understanding of the objectives and a higher emotional attunement to the way people skirt around big topics could achieve greater depth and uncover richer insights. That requires both human IQ and EQ, which the machine doesn’t have. A human with these qualities is still needed to tune the AI to deliver its best outputs.
Which is why the idea of a four-day week based on AI efficiency is as utopian as the fear of massive job losses is catastrophist. The Dunning-Kruger effect, turbocharged by generative tools, will ruthlessly expose enterprises that mistake algorithmic speed for depth. Jobs and companies built on AI’s cold efficiency and unfounded self-confidence will soon be exposed.
The roundtable revealed a discussion on AI still stuck on threats and oblivious to skills. In the end, the danger isn’t that AI will outsmart us; it’s that humans will be too dumb to use it well.
Parnell Palme McGuinness is managing director at campaigns firm Agenda C. She has done work for the Liberal Party and the German Greens.
At our top university, AI cheating is out of control!
Robert A*, The Australian 29 August 2025
I’ve been a frontline teaching academic at the University of Melbourne for nearly 15 years. I’ve taught close to 2000 students and marked countless assessments.
While the job can be demanding, teaching has been a rewarding career. But a spectre is haunting our classrooms: the spectre of artificial intelligence.
Back in the day, contract cheating – where a student paid a third party to complete their assignment – was the biggest challenge to academic integrity. Nowadays, contract cheaters are out of work. Students are turning to AI to write their essays and it has become the new norm, even when its use has been restricted or prohibited.
What is the value of the university in the age of AI? Ideally, university should be a place where people are not taught what to think but how to think. It should be a place where students wrestle with big ideas, learn how to reason and rigorously test evidence. On graduation they should be contributing to and enhancing society.
Instead, AI chatbots, not Marxist professors, have taken hold of universities. AI is not an impartial arbiter of knowledge. ChatGPT is likelier to reinforce than to challenge liberal bias; Grok’s Freudian slips reveal a model riddled with anti-Semitism; DeepSeek is a loyal rank-and-file member, toeing the Chinese Communist Party line and avoiding questions about its human rights record. When the machine essay-writing mill is pumping out essays, AI is the seductive force teaching students what to think.
While we know AI cheating is happening, we don’t know how bad it is and we have no concrete way of finding out. Our first line of defence, AI detection software, has lost the arms race and no longer is a deterrent. Recently, I asked ChatGPT to write an essay based on an upcoming assessment brief and uploaded it to Turnitin, our detection tool. It returned a 0 per cent AI score. This is hardly surprising because we already knew the tool wasn’t working as students have been gaming the system.
Prosecuting a case of academic misconduct is becoming increasingly difficult. Many cases are dismissed at the first stage because the AI detector returns a low score that doesn’t satisfy the threshold set by management. The logic seems to be that we should go for the worst offenders and deal with the rest another way. Even with this approach, each semester the academic integrity team is investigating a record-breaking number of cases.
To deal with the inundation of AI cheating, the University of Melbourne introduced a new process for “lower-risk” academic integrity issues. Lecturers were given discretionary powers to determine “poor academic practice”. Under this policy, essays that look as if they were written by AI but scored 0 per cent could be subject to grade revision. Problem solved, right? Not even close.
Tutors are our second line of defence. They do most of the classroom teaching, mark assessments and flag suspicious papers. But a recent in-house survey found about half of tutors were “slightly” or “not at all” confident in identifying a paper written by AI. Others were only “marginally confident”. This is hardly their fault. They lack experience and, without proper training or detection tools, the university is demanding a lot from them.
Lecturers are the final line of defence. No offence to my colleagues, but we are not exactly a technologically literate bunch. Some of us know about AI only because of what we read in the paper or what our kids tell us about it.
We have a big problem on our hands, the “unknown-unknown” dilemma. We have an academic workforce that doesn’t know what it doesn’t know. Our defences are down and AI cheaters are walking through the gates on their way to earn degrees.
Soon we will see new cohorts of doctors, lawyers, engineers, teachers and policymakers graduating. When AI can ace assessments, employers and taxpayers have every right to question who was actually certified: the student or the machine? AI can do many things but it should have no place in the final evaluation of students.
A wicked problem surely requires a sensible solution. If only. Federal Education Minister Jason Clare has acknowledged the AI challenge but passed the buck to the sector to figure it out. With approval from the regulator, many Australian universities have pivoted from banning to integrating AI.
The University of Melbourne is moving towards a model where at least 50 per cent of marks in a subject will have to come from assessments done in a secure way (such as supervised exams). The other 50 per cent will be open season for AI abuse.
All subjects will have to be compliant with this model by 2028.
Australian universities have surrendered to the chatbots and effectively are permitting widespread contract cheating by another name. This seriously risks devaluing the purpose of a university degree. It jeopardises the reputation of Australian universities, our fourth largest export industry.
There is real danger that universities soon will become expensive credential factories for chatbots, run by other chatbots.
There are many of us in the sector who object to this trend. Not all students are sold on the hype either; many reject the irresponsible use of AI and don’t want to see the critical skills taught at university cheapened by chatbots. Students are rightly asking: if they wanted AI to think for them, why are they attending university? Yet policymakers are out of touch with these stakeholders, the people living through this technological change.
What is to be done? The challenge of AI is not a uniquely Australian problem but it may require a uniquely Australian solution. First, universities should urgently abandon the integrated approach and redesign degrees that are genuinely AI-free. This may mean 100 per cent of marks are based on paper exams, debate, oral defences or tutorial activities.
The essay, the staple of higher education for centuries, will have to return to the classroom or perish. Australian universities can then proudly advertise themselves as AI-free and encourage international and domestic talent to study here.
Second, as AI rips through the high school system, the tertiary sector should implement verifiable admission exams. We must ensure that those entering university have the skills required to undertake it.
Third, there must be priority investment in staff training and professional development to equip teachers for these pedagogical challenges.
Finally, Clare needs to show some leadership and adopt a national, enforceable standard. Techno-capitalism is leading us away from the ideal of the university as a place for free thinking. If independent scholarly inquiry at university falls, our human society will be the biggest loser.
Robert A* is an academic at the University of Melbourne and has written under a pseudonym.
What hope for us if we stop thinking
Jacob Howland, The Australian, via UnHerd, September 5 2025
In the faculty reading room of a university library where I spent many happy hours, two lines from Emily Dickinson were chiselled into the fireplace’s stone breastwork:
There is no Frigate like a Book
To take us Lands away.
That “Lands away” evokes open horizons of intellectual adventure and discovery – the idea of higher education that thrilled my teenaged self, and that I still associate with the musty smell of library bookstacks. The college I graduated from in 1981 promised to help us learn to read deeply, write clearly, think logically, and sort signal from noise in multiple languages of understanding. We would be equipped, not just for specialised employment, but for the lifelong task of trying to see things whole – to form, in the words of John Henry Newman, an “instinctive just estimate of things as they pass before us”.
Colleges and universities still make similar promises, but they mostly ring hollow. Since the 1980s, multiple factors – skyrocketing tuition and economic uncertainty, the precipitous decline of reading, the widespread collapse of academic standards, and the ideological radicalisation of course syllabi – have drastically shrunk the horizons of teaching and learning on campus.
More recently, three mostly self-inflicted storms have slammed higher education, revealing systemic rot. Unless universities can right their listing and leaking ships, future generations will graduate with little awareness of the richness and breadth of human experience, and little knowledge of where we’ve been and where we’re going. And that will be a terrible loss for all of us.
Covid – the first great storm, in 2020 – was a disaster for education, and a reality check for schools at every level. Primary and secondary students lost months or years of learning. School districts abandoned pre-existing academic standards, and parents who (thanks to Zoom) were able to observe their children’s classes were often appalled by what they saw and heard. College students who were compelled to attend “virtual” courses were similarly shortchanged. Universities signalled that money mattered more than mission when they continued to charge full tuition for classes where many students were present only as muted black squares.
Deprived of the social experience and amenities of life on campus, many undergraduates and prospective students decided that a university education wasn’t worth the cost.
Three years later, in 2023, the October 7 pogrom revealed that activist faculty and administrators had corrupted the core mission of higher education: to pursue truth and extend and transmit knowledge. Americans were alarmed to see mobs of students, radicalised by “critical theories” of oppression and victimisation, harassing and sometimes violently intimidating Jewish classmates. They were stunned when the presidents of Ivy League universities saw no real problem there. And they were dismayed to realise that much of what passes for higher education, especially at elite universities, is actually indoctrination in cultural Marxism.
The pandemic and the aftermath of October 7 have undeniably contributed to plummeting public trust in universities. But the third and biggest storm, precipitated by generative AI chatbots, threatens to sink higher education altogether. And this time, it is the students who are the problem – if only because we never managed to teach them that committing oneself to the process of learning is no less important than getting a marketable degree.
OpenAI’s ChatGPT reached a million users just five days after it launched in 2022. Two months later, a survey of 1000 college students found that 90 per cent “had used the chatbot to help with homework assignments”. Students’ use of chatbots is undoubtedly more widespread today, because the technology is addictive. As a professor wrote recently in The New Yorker: “Almost all the students I interviewed in the past few months described the same trajectory: from using AI to assist with organising their thoughts to off-loading their thinking altogether.”
At elite universities, community colleges, and everything in between, students are using AI to write their applications for admission, take notes in class, summarise required readings, compose essays, analyse data, and generate computer code, among other things – in short, to do the bulk of their assigned schoolwork.
They report that using AI allows them to produce research papers and interpretive essays in as little as half an hour and earn high grades for work they’ve neither written nor, in many cases, even read. A first-year student seems to speak for entire cohorts of undergraduates when she admits that “we rely on it, (and) we can’t really imagine being without it”.
Yet not all students think this is a good thing. An article in The Chronicle of Higher Education quotes multiple undergraduates who are hooked on the technology, and are distressed at being unable to kick the habit – because, as one confesses, “I know I am learning NOTHING”.
That last claim is only slightly overstated. Students who depend on AI to do their coursework learn how to engineer prompts, divide up tasks, and outsource them to machines. That’s not nothing, but it’s a skill that involves no internal assimilation of intellectual content – no actual learning – beyond managing AI projects involving data acquisition, analysis, and synthesis. AI dependency furthermore contributes to cognitive impairment, accelerating a decades-long decline in IQ. And it cheats everyone: students who’ve prepared for class but find themselves among unresponsive classmates, and professors who spend hours drafting lectures that fall on deaf ears and grading essays written by machines. It cheats the cheaters themselves, who are paying good money for nothing but an unearned credential so that they will have time for other things – including, as one student admitted, wasting so many hours on TikTok that her eyes hurt. It cheats employers who hire graduates in good faith, only to discover their incompetence. Last but not least, it cheats society, where informed citizens and competent leaders are in notably short supply.
To make matters worse, the illicit use of chatbots is difficult to detect and even harder to prove. Companies and TikTok influencers offer products and coaching that help students camouflage their use of AI. Students have learned how to avoid “Trojan horse” traps in assignments, design prompts that won’t make them look too smart, and launder their essays through multiple bot-generated iterations. AI-powered software has furthermore proved to be highly unreliable at identifying instances of AI-generated work. (This is unsurprising: why would providers like OpenAI, which makes ChatGPT Plus free during final exams, want to imperil huge student demand for its product?) And in the long run, market forces will always keep students one step ahead of their professors.
Case in point: a student who was expelled from Columbia University for dishonesty has raised more than $US5m to design a wearable device that “will enable you to cheat on pretty much anything” in real time – including in-class essays, which would otherwise create an AI-free testing environment.
So far, universities have no good answers to the existential questions posed by AI. What is needed from academic leaders is a full-throated explanation of what universities are, why they exist, and what it means to get a real education. Instead, presidents, provosts, and deans have remained silent – perhaps, one fears, because they are no longer capable of delivering such an explanation. They’ve let faculty establish their own AI-use policies, which vary widely and are, in any case, difficult to enforce consistently.
Professors, too, are using chatbots to formulate assignments, grade papers and no doubt write lectures. I don’t entirely blame them: the technology is an efficient solution to the drudgery of teaching students whose investment in their educations is merely financial and transactional. But in their courses, as on much of the internet, AI is largely talking to AI.
Will universities survive if they become little more than expensive credential mills? The most elite ones will, coasting on past glory and present status. Others will put a smiley face on the corruption of higher education. They will embrace AI, supposing that essentially managerial skills will suffice when superintelligent machines learn how to do “most of the real thinking”, as a well-known economist and an AI researcher predict they eventually will. Yet in everything from diplomacy to medicine, real thinking – thinking at the highest levels, where strategies are devised and executed – requires practical wisdom: an adequate understanding, not just of the range of digital tools available to us and how to operate them, but of the ends these tools ought to serve.
This is to say nothing of the fact that the AI tools that are by orders of magnitude most widely used – Large Language Models, trained on the polluted content of the worldwide web – are deceptive, prone to hallucinations, and politically biased: qualities manifestly unsuited to the pursuit of truth.
But, you may ask, are reading and writing still relevant in the digital age? Does it really matter that, in a study conducted a decade ago, 58 per cent of English majors at two academically mid-level universities in Kansas “understood so little of the introduction to (Charles Dickens’) Bleak House” – a book that was originally serialised in a magazine, and reached a wide audience across all social classes – “that they would not be able to read the novel on their own”? Or that these same students had so little self-knowledge that they “also believed they would have no problem reading the rest of the 900-page novel”? Yes, it does matter – if we hope to preserve our humanity. This is not because Dickens is particularly important, but because of what these findings say about students’ poor command of language, the basic medium of human understanding. What would these English majors make of Shakespeare? Would political science majors fare better with Tocqueville or the Federalist Papers? Or philosophy majors with Aristotle? Don’t bet on it.
Writing in the 1960s, the philosopher Emmanuel Levinas seems to have foreseen our age of shortcuts, where machine-generated bullet points substitute for active engagement with challenging material. Levinas understood that the precious inheritance of culture, the wellspring of all new growths and great ideas, is indispensable in navigating the trackless future. “A true culture,” he observed, “cannot be summarised, for it resides in the very effort that cultivates it.”
That effort begins with authentic cultural appropriation: the slow, sometimes laborious, but ultimately joyful internalisation of the best that has been thought and said. It is this process of education that gives us ethical, intellectual, and spiritual compasses, tells us where to look for answers, and allows even relative amateurs to seek them “lands away”. And without this ongoing renewal of intellectual culture, technological plans and political programs must inevitably suffer from what Socrates regarded as the worst vice of all: ignorance.
Education at its best develops the virtues or excellences of thought and action, taste, feeling and judgment that fit one for all seasons, occasions, tasks and responsibilities of life.
And that moral, intellectual, and spiritual attunement, not just to physical reality, nor to the largely unforeseeable contingencies of time and history, but to eternal or transcendent truths, is good in itself as well as for its consequences. Universities used to regard these as truths so self-evident that they hardly needed saying. But they need saying now. In this hour of need, let us hope that academic leaders are still up to the task.
Jacob Howland is the former provost, senior vice-president for academic affairs, and dean of intellectual foundations at the University of Austin in Texas. An earlier version of this article appeared in UnHerd.