Kensington Palace’s family snap did the world a great service

It was one hell of a way to do it, but Kensington Palace did society a huge favour by releasing a digitally altered photograph of the Princess of Wales and her children.

The photograph – distributed then withdrawn by picture agencies around the world – may have finally brought home to the general public the fact that they can no longer take ‘reality’ for granted. And it sent a message to the media-knocking public that charges should not always be laid at the feet of journalists.

The amateurish nature of the manipulation (I doubt the palace has the latest version of Photoshop) made it easy for the public to see that the picture of the family had been altered. Poor Prince Louis appears to have had a terrible accident to his right hand. Princess Charlotte’s wrist appears to have suffered a nasty sideways fracture. And shock, horror: Kate’s wedding ring seems to be missing.

There were further anomalies apparent to the trained eye and to forensic investigation. The Guardian’s imaging team believes the picture may be a composite of several frames and has identified 20 anomalies in it.

Princess Catherine has taken responsibility for the ‘editing’, but The Firm has too many systems and servants for that explanation to be particularly credible.

The net result was a PR headache of enormous proportions for the Royal Family and an injection of rocket fuel into the rumour mill speculating about the health and marriage of the princess.

Eric Baradat, director of photography at Agence France-Presse, said all the picture agencies had had “total trust with the material that Kensington Palace is usually sending out”. It was a signal that such trust had now been damaged.

That damage extends well beyond the reputation of the British Royal Family. It was a message as loud as a Led Zeppelin concert that the public should not trust what they see.

Of course, image manipulation is nothing new. Addressing the latest royal example, historian Professor Kate Williams on CNN recalled an example of Tudor-period ‘fake news’ in a portrait of Henry VIII. Artnet News noted that Jacques-Louis David’s painting of Napoleon’s consecration as emperor not only makes Bonaparte taller than he was in life but also his wife Joséphine “is rendered more beautiful and youthful than her 41 years”. And Reading University research fellow Joshua Habgood-Coote, in an article for The Conversation, said Victorian photographers played fast and loose with pictures of Victoria and Albert.

We also find ‘combat’ photography littered with fakes – from Mathew Brady and Alexander Gardner rearranging corpses during the American Civil War to the re-staging of the raising of the flag on Iwo Jima in the Second World War.

And image manipulation was common in the pre-digital days in which I served the first part of my journalism career. Photographers used their photo printing skills to ‘burn in’ unwanted elements (reducing them to black or deep shadow), and photographs had areas ‘whited out’ so they could be clear-cut by photo-lithographers for the same purpose. In one case in Canada, a newspaper was sued after an editorial artist painted out a prize bull’s money-making bits in the interests of readers’ sensibilities.

The phrase ‘the camera never lies’ has always been qualified by the ethics of the photographer, darkroom technician, and editor. The unwritten rule in most newsrooms was not to cross a line where the perception of an event was altered. It was okay to remove a power line from a picture but not to take an individual out of a group photograph…or to insert someone who had not been present.

Ethics aside, the ability to manipulate images in the past was constrained by technological limitations. That is no longer the case, and the present-day assault on reality may have reached the point where detection is very difficult. Had the palace utilised software with more AI power behind it, the manipulation might have gone unnoticed.

The power of artificial intelligence to alter images in ways that are almost undetectable has grown exponentially, and that power is not limited to still photographs.

OpenAI is currently road-testing a system called Sora, which is capable of turning a text request into startlingly realistic video. The Wall Street Journal’s Joanna Stern recently tested it out. Her verdict: “…amazement about the capability followed by fear for society”. We have come a very long way from the rather obvious deepfake of Barack Obama produced by Jordan Peele in 2018 to show AI’s ‘capabilities’.

And, just as AI-generated manipulation has come a long way, so have levels of distrust. The Kensington cockup just made it worse.

Equally worrying, however, was a conversation I had with someone last week. She was telling me about sitting with her son watching a music video. She mentioned the people taking part in the video. The ensuing discussion went something like this:

“You mean they’re real?”

“Yes, of course they are real.”

“Why would they do that? It would be much easier and take less time to just generate them with AI.”

In other words, there may be a growing acceptance of ‘created reality’ and a willingness to make no distinction between that and ‘real reality’. If that is so, society is in for some troubled times.

Already there are fears about the impact of AI on disinformation in the 50 elections around the globe this year, affecting about two billion people. The growing sophistication of the technology producing disinformation is making it harder to detect.

The United States, the European Union and other jurisdictions are working on legal frameworks to ensure that material created by generative AI is labelled as such. Last month, 22 tech companies, including Amazon, Google, Microsoft, and Meta, signed a joint statement pledging to address risks to democracy during this election year, including AI-generated disinformation.

Their job is made more difficult by growing susceptibility to such material, to say nothing of the fact that bad actors will treat legal protections with derision – and have the technology to strip or circumvent the metadata ‘flags’ that mark alterations.
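To make that concrete: the simplest of those metadata ‘flags’ is an EXIF tag written by editing software, and it is trivial both to read and to remove. The sketch below is purely illustrative, assuming Python with the Pillow library and a placeholder file name; it is not how agencies actually verify handout pictures.

```python
# Illustrative only: read the EXIF "Software" tag that many editing tools write.
# It is a weak signal - the tag can be stripped or forged - and the file name
# below is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def editing_software_tag(path):
    """Return the EXIF 'Software' value if present, otherwise None."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "Software":
            return str(value)
    return None

print(editing_software_tag("family_photo.jpg"))  # e.g. "Adobe Photoshop ..."
```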

News media are already burdened by low levels of trust, although the swift ‘kill’ notice on the Kensington photograph will have earned a few points. It is vital that the industry do its utmost to avoid becoming party to the spread of fake or manipulated visual material (still and video). It needs to develop collective defences. That means working together, even with those it sees as fierce competitors. The stakes are that high.

They can, however, take advantage of one aspect of human behaviour.

We may no longer be able to believe what we see with our eyes, but we are still able to react negatively to attempts to deceive us. In short, no one likes being conned.

If news media are seen as a bulwark against deception, people will heed the warnings, and levels of trust will rise. It will require diligence and hard work. Some of that work will involve use of well-established practices such as verification and fact-checking. It will also include forensic detection methods.
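To give a sense of what one such forensic method looks like – a minimal sketch under assumed tooling, not a description of any newsroom’s actual workflow – error level analysis re-saves a JPEG at a known quality and compares the result with the original; areas edited after the last save tend to recompress differently and show up as brighter patches. Python, the Pillow library, and the file names are assumptions for the example.

```python
# Minimal error level analysis (ELA) sketch using Python and Pillow.
# Assumes a JPEG input; file names are placeholders for the example.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    resaved_path = path + ".resaved.jpg"
    original.save(resaved_path, "JPEG", quality=quality)  # recompress at a known quality
    resaved = Image.open(resaved_path)
    diff = ImageChops.difference(original, resaved)       # per-pixel difference image
    max_diff = max(high for _, high in diff.getextrema()) or 1
    # Amplify the (usually faint) differences so edited regions stand out.
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("family_photo.jpg").save("family_photo_ela.png")
```

No single test of this kind is conclusive; verification teams combine such signals with checks on where the material came from in the first place.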

Toby Walsh, a professor of artificial intelligence at the University of New South Wales, last year published a book called Faking It. He finished the book with this:

As much as it pains me to admit as a human, the more we build, the more we will inevitably destroy. We are masters of the technology, but we make mistakes with it. We build it too fast with insufficient knowledge. We fail to understand it. We build it in ignorance of its potential. We are constantly afraid that the technology will do things we imagine it can do that it, in fact, cannot.

And so, I will leave everyone with one universal thought: The smartest person in the room is the one that closes the door.  

Can we close the door to the assault on visual reality? I have my doubts. Why? Because those closing words were not written by Toby Walsh but – at his request – by ChatGPT.
