Another threat to our news industry: AI slop

AI slop – the 2025 word of the year in several prominent dictionaries – is lazy and often misleading online content generated by artificial intelligence. It presents a major threat to legitimate news outlets, including those in New Zealand.

There is a high risk that the loss of trust that inevitably follows users’ realisation that they have been taken for a ride will widen into a belief that ‘you can’t trust news, full stop’.

Yesterday both RNZ and TVNZ carried stories about the emergence of AI slop in this country, primarily on what purported to be a news site, but which carried no original reporting and distorted visual reality.

They interrogated the site – called NZ News Hub (clearly a take-off of TV3’s Newshub which closed in 2024) – and found numerous AI-generated images that were not identified as such. The broadcasters also found the site consistently dramatised natural disasters and emergencies beyond what had actually occurred.

For example, an image of the tourist boat that foundered in Akaroa Harbour was doctored to suggest there were far more passengers aboard than was actually the case. Photographs of the recent East Coast slips were altered to indicate greater damage, and wrecked cars and houses were added to the Mt Maunganui slip.

RNZ’s study found NZ News Hub had been publishing dozens of posts a week drawing on material written by bona fide New Zealand media (including RNZ itself) but adding AI-generated images and video, and sometimes embellishing stories scraped from news sources. It also found many AI slop sites are run from foreign locations including Malaysia (where there are numerous content farms) and Vietnam.

The site also uses still images to create fake videos. For example, one drew on a still image of a 15-year-old victim of the Mt Maunganui landslide and appeared to show her dancing enthusiastically. In other cases, still images recognised by a news outlet were used to produce AI-generated video purporting to show Prime Minister Christopher Luxon at Waitangi and discussing the November election with Finance Minister Nicola Willis.

A disturbing fact is the number of followers that NZ News Hub has – more than 4700. Its ‘reports’ can generate up to a thousand likes. I find it hard to imagine that these are worldly-wise people who know they are being duped with AI slop. They are likely to be taking the posts at face value. My fear is that when they realise they are being duped, their anger and embarrassment will not be limited to the slop-makers but, through a sort of defence mechanism, be extended to all media – “you can’t trust any of them”.

Perversely, revelations about AI slop in legitimate news media may also erode trust. Exposing these bogus sites is manifestly in the public interest. However, for people with already-diminished trust in media, it can lead to illogical conclusions fed by confirmation bias. If a photograph or video can be AI-manipulated by slop merchants, what’s to stop real news outlets from doing the same?

Those news outlets make no secret of the fact that they use artificial intelligence as a newsroom tool. Last week, the co-director of AUT’s Journalism Media and Democracy research centre (JMAD), Associate Professor Merja Myllylahti, published what she described as a baseline report on the use of AI in New Zealand newsrooms. She found AI tools are widely used in day-to-day news and content production, particularly by private sector commercial media.

All of our mainstream media have adopted individual codes of practice on the use of AI and all of them ban the AI creation of news images and video. Human oversight plays a critical role in each code.

Nonetheless, her report shows that artificial intelligence has replaced humans in some parts of the news production cycle, albeit with real people overseeing proceedings. That oversight is vital, because JMAD has found that 60 per cent of New Zealanders are uncomfortable about news produced mostly by artificial intelligence with only some human oversight. The comfort level improves when news is produced by journalists with some assistance from AI. But how does the consumer know? The JMAD report finds there is still little information on how it is used in everyday news gathering, production and distribution.

Dr Myllylahti identifies a disturbing attitude. Although New Zealand’s main media corporations have published their principles and ethics of AI use, the principles themselves do not reveal much about how AI assistants and tools are used in everyday work.

“While AI principles and ethics call for transparency and openness, and labelling of AI content, some editors in New Zealand believe that in terms of tagging AI content ‘the ship has sailed’,” she says. “Some of them argue that as ChatGPT is used mainly as a ‘replacement for Google search’, it is not necessary to tell the audiences how the AI is used in the process.”

That is dangerous thinking.

Only by employing a fully transparent approach can news media persuade the audience that their use of AI is ethical and trustworthy.

The JMAD report identifies the uses that NZME, Stuff, RNZ, and TVNZ make of AI and sets them out in a table (below). Nothing suggests improper use, and certainly nothing approaching AI slop standards.

The fact that this is the first time I have seen such a table suggests that there is insufficient disclosure, insufficient transparency.

Artificial intelligence is fast becoming ubiquitous. It aids farmers to monitor livestock and crops, manages traffic flow, and makes your weekly shop more efficient. However, it is still not widely trusted. Perhaps we are hard-wired to regard robots as existential threats.

Distrust is fed by such nefarious uses as Grok, the AI chatbot that has allowed sexualised fake images to be distributed on X. It is also fed by the flood of AI slop that is not limited to news. It extends to harmless content such as cat videos (confession: I am very fond of cat videos) that renders felines and other creatures as too cute, too empathetic, too demanding, and makes reality hard to find. And it is that disconnect with reality that troubles many of us.

Trust in news has been deteriorating for a number of reasons. It is a complex potion that is partly society-wide and partly due to the way the news industry has conducted itself. It does not need that potion further complicated by its use of AI and charges that it is misusing the technology.

The way to avoid that is by being totally transparent about when and how AI is being used. None of the four main players does well in that regard. The AI codes of practice are not easy to find, and disclosure of AI use is rarely evident on stories (the exception being BusinessDesk, which has made full disclosure of its AI processing of material from the New Zealand Stock Exchange). An occasional story about the newsroom’s new tool does not cut it.

AI disclosures should become routine, something the audience expects to see or hear. Their use would be enhanced by the adoption of an industry-wide recognition system with symbols and phrases applied in common. The effort will be less effective if each outlet decides to go it alone, applying slightly different standards and disclosing them in different ways.

In time, disclosure may be reduced to simple symbols. In the Age of Emoji it should not be beyond the talents of the news industry to design telling symbols. Keep it human. Don’t hand the task to AI.

  • Yes, this is a disclosure: The image at the top of this week’s commentary was created by AI (ChatGPT).

2 thoughts on “Another threat to our news industry: AI slop”

  1. Gavin Ellis – Gavin Ellis is a media consultant, commentator and researcher. He holds a doctorate in political studies. A former editor-in-chief of the New Zealand Herald, he is the author of Trust Ownership and the Future of News: Media Moguls and White Knights (London, Palgrave) and Complacent Nation (Wellington, BWB Texts). His consultancy clients include media organisations and government ministries. His Tuesday Commentary on media matters appears weekly on his site www.whiteknightnews.com
    Gavin Ellis says:

    From Jim Tucker:
    Oh dear. I have a disclosure to make. When I was a cadet reporter back in 1966, one of my jobs was to draw the daily weather map, aided by a set of numbers that came from the weather office via a teleprinter. Some days, the numbers didn’t come through in time to meet our first edition deadline at the Taranaki Herald, so I’d simply move yesterday’s high or cold front across to the right and hope for the best. It didn’t always work. One day, a commercial fisherman told me how the “bloody weather map” misled him to the extent he got caught in a storm out there in the Taranaki Bight. We at the paper (I was one of five cadet weather map scribes, equipped with a pot of black ink and old fashioned nibbed pens that sometimes caught the paper and caused blots) never disclosed to the reading public what we did. Neither did my sister-in-law while working at a woman’s mag tell the readers her job each week was to “make up” the zodiac sign predictions. Nor did we let on at the Star the day we were “forced” to beat up a minor fire in South Auckland to become a raging inferno in order to scrape up a front page lead one quiet day. AI is completely different, of course.

    Jim Tucker – editor, writer, publisher JimTuckerMedia New Plymouth

  2. Gavin Ellis says:

    There was a lovely, very flattering image of Mr Tucker attached to the post but it could not be loaded. It was, of course, the product of AI.
    A confession of my own: One day many, many years ago the horoscopes did not arrive from our syndication service. I re-ran the day’s predictions from the previous week. No-one noticed.
