Cute AI cats may be fun, but crime scene body bags cross the line

Cute cats dancing a tango on social media may be a bit of fun, but posting AI-generated body bags in a real-life crime scene image defies any common standards of human decency.

The Herald on Sunday’s lead story this week revealed that a Facebook page “dedicated to sharing factual stories sourced from police and trusted news platforms” shared a fake image of body bags being loaded into an ambulance at the scene of an alleged triple homicide in Hastings on April 19.

The incident involved the discovery of a mother and her two young children, found dead in their Hawke's Bay home. A 36-year-old man has since been charged with three counts of murder.

The image was posted on a Facebook page called Australia/NZ Crime TV. The post has now been removed. It purported to show a cordoned-off scene with two police cars and two ambulances, into which body bags were being loaded. Quite rightly, the Herald on Sunday chose not to publish the image.

When contacted by the Herald on Sunday, a person identified only by their forename said the use of AI was being reviewed and that “our previous use of AI has been limited to generating general graphics that provide visual context to our stories”.

The site’s opening title includes the clause: “Some images are altered for legal reasons as investigations are ongoing.” In fact, the use of AI-generated images on the site is extensive and includes the re-rendering of crime scenes. And if that disclosure is expected to warn users of the extensive use of digital fabrication, it falls way short. Continue reading “Cute AI cats may be fun, but crime scene body bags cross the line”

Good reasons why Skinny’s clone Liz needed more than AI

There is a glimmer of good news for all those bright, talented creatives in the advertising industry who think artificial intelligence is going to steal their jobs.

A study by Australia’s Monash University on behalf of the advertising agency TBWA has found that human creative concepts always outperform generative AI creations.

Does that mean that Liz from Kerikeri should not have bothered handing over her biometric data to become the cloned frontwoman of those Skinny mobile TV commercials that proudly proclaim to be made with the help of AI? No, because the irritatingly obvious creations were the brainchild of real people, and the use of AI was trumpeted by Skinny and its parent company Spark as a clever marketing ploy.

The Australian study was aimed at testing whether artificial intelligence could, in fact, replace creatives in dreaming up advertising ideas and slogans. The finding: AI consistently fails at creativity.

The Monash researchers took 1000 creative advertising campaigns and fed them into large language models (LLMs) that were first asked to strip the advertising messages down to single sentences. The cunning academics then fed the single sentences back into the AI machines and asked them to create advertising campaigns.

TBWA Sydney chief creative officer Matt Keon told The Australian that the LLMs had quickly removed all of the creative elements while reducing the campaigns to single sentences and then consistently failed to produce such elements in their recreations of the campaigns. Continue reading “Good reasons why Skinny’s clone Liz needed more than AI”

AI-created editorials: What in HAL’s name was the Herald thinking?

Integrity is the most valued element of a news organisation’s reputation. Without it, it cannot expect its audience to lend credence to what it publishes or broadcasts. So, the New Zealand Herald has dealt itself an awful blow.

Its admission that it used generative AI to scrape content and then create an editorial about the All Blacks came only after it was caught out by Radio New Zealand. RNZ’s subsequent revelation that it may have found another three robot editorials in the Herald was met with sullen silence.

All the country’s largest newspaper will say is that it should have employed more “journalistic rigour”.

That is not good enough. It does not explain why the paper made the bizarre choice to employ Gen AI to create what should be its own opinion. It does not explain why there was no disclosure of its use (although to do so on an editorial should raise more red flags than a North Korean Workers Party anniversary). It does not tell us how widespread the practice is within publications owned by NZME (the Herald editorial was reprinted in its regional titles). It does not explain why even the most basic sub-editing was not applied to an obviously deficient piece of writing when editorials have previously been checked and rechecked to prevent the most minor of errors. And it does not reveal what went wrong in the editorial chain of command to allow all or any of the foregoing to occur…or not. Continue reading “AI-created editorials: What in HAL’s name was the Herald thinking?”

EU framework for AI laws: First steps to taming a beast

The European Union has agreed to pass the world’s first laws governing the use of artificial intelligence. It is one step on a long and winding road.

It is unsurprising that this initiative came out of the EU. It has been the only governing body to consistently put its people ahead of the wishes of the companies that control the search and social media platforms that intrude into virtually every nation on the planet.

The historic agreement came after 36 hours of solid negotiation among the EU member states and it sets out the parameters on which the laws will be based.

The move is hugely significant, but it should not be seen as a full solution to curbing negative impacts while allowing the positive aspects of AI to flourish.

It aims to ensure that AI systems used inside the EU are safe and respect fundamental rights. In other words, it is based on a harm principle. That means it will target high-impact AI systems that pose potential risks and strictly limit the use of AI tools for state surveillance. Continue reading “EU framework for AI laws: First steps to taming a beast”