We humans have always had a bit of a penchant for futile exercises.
The ancient Greeks had a death-defying king rolling a large boulder up a hill for eternity. Much later, the Japanese invented an infuriating game called Whac-A-Mole.
Now the media are trying to stay a step ahead of generative artificial intelligence.
Media companies around the world are grappling with editorial guidelines for a digital phenomenon that can be both a tool to enhance their productivity and an insidious weapon that can be used against them.
Some see it as an existential threat that should be banned outright but, really, artificial intelligence is like firearms and opioids – useful in the right hands but extraordinarily dangerous in the wrong ones. And, like drugs, its legitimate use needs to be carefully prescribed.
Many media companies have long had policies on the use of artificial intelligence, broadly based on the principle of transparency. Often, they were an extension of their guidelines on the manipulation of images by software such as Photoshop. Now, however, they are in the process of updating those guidelines to cope with generative artificial intelligence, which, on request, appears to create something that wasn’t there before.
Earlier this month, The Globe and Mail in Toronto updated its readers on a new set of guidelines it had sent to staff on the use of generative AI. It prefaced its outline with the following:
Over the course of a few short months, artificial-intelligence tools for working with text, photography and video have moved from the realm of science fiction to reality. Like many organisations in industries across the country, The Globe is devoting significant energy to thinking through how these new tools will change the way we work – particularly the promise of AI as a journalistic tool, but also the threat it may pose to truth, trust and transparency, which are the core tenets of The Globe and Mail’s journalism.
Its specific guidelines allow the use of AI in research, but all such material must be treated with scepticism and regarded as unverified. Reporters are prohibited from using AI tools such as ChatGPT to condense, summarise, or produce writing for publication, and AI cannot be used to edit stories. Image-creating AI tools must not be used in news photography or news video, and feature illustrations created with them must carry a disclosure line.
The guidelines allow for the use of generative AI in stories about its development. They also acknowledge that there is “a flood of new tools coming online every week – some with questionable ethics and opaque terms of use”. Staff are allowed to experiment with these products, but only after seeking approval.
The “guardrails” are characterised as a starting point and are likely to change as the technology – and The Globe and Mail’s thinking about it – evolves.
In New Zealand, RNZ is in the throes of developing new policies on AI and, in the interim, has banned the use, publication or broadcast of AI-generated content except as part of a story on the subject. It has also warned staff against using confidential or sensitive data in AI tools such as transcription services.
Stuff also has teams considering AI developments and the use of AI tools. Its current guidelines allow staff to use AI tools but require transparency about their use, and state that AI-generated content must meet the same standards of accuracy, fairness, and balance as any other content.
New Zealand Herald publisher NZME allows the use of AI technology where appropriate “and when we can verify accurate content that meets our trust and quality standards”. NZME requires staff to seek prior approval from editors before using the tools.
BusinessDesk employs ChatGPT to write articles from company announcements to the New Zealand Stock Exchange in order to speed publication. Its policy requires such stories to carry a ‘BD AI’ by-line (a little obscure for the uninitiated?) but such items do not replace follow-up stories, which must continue to be written by reporters.
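BusinessDesk has not published the details of its pipeline, but a minimal sketch of how such automation might work, using OpenAI’s Python client, could look like the following (the model choice, prompt wording, and function names are illustrative assumptions, not BusinessDesk’s actual system):

```python
# Hypothetical sketch of drafting a news brief from an NZX announcement.
# The prompt, model and by-line handling are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_brief(announcement_text: str) -> str:
    """Ask the model for a short, factual item based solely on the
    supplied announcement; it is told not to add outside facts."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": ("You are a financial news writer. Summarise the "
                         "announcement below in three factual paragraphs. "
                         "Do not add information that is not in the text.")},
            {"role": "user", "content": announcement_text},
        ],
    )
    story = response.choices[0].message.content
    return f"BD AI\n\n{story}"  # disclosure by-line, per the stated policy
```

Even with such guardrails in the prompt, output of this kind still needs the human checks the policies above describe: a model can misread figures or invent context.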
These guidelines, however, focus on the use of AI by media organisations. That is only one side of the coin and, frankly, it is the easier face to fashion.
The other side of the coin – the use of generative AI to create disinformation – is as difficult to control as guns and drugs in the hands of gangs.
Tools such as ChatGPT can not only create text but also mimic the tone and style of a real organisation. To test that facility, I asked ChatGPT to create two media releases on National Party policies. Not only did it capture a policy announced on the day I wrote this commentary, but the character of the text was indistinguishable from a bona fide release from the party. All that was missing was the logo and contact details (easily accessed on the web).
The ease with which AI constructed basic text, into which an additional piece of disinformation could be inserted, was a stark reminder that today’s journalists must take nothing for granted. ChatGPT’s ‘media release’ would require a verification call to party HQ, although there was a clue in the text – it used American spelling in two paragraphs.
Such verification is time-consuming but relatively easy to achieve by checking with a known source. That is not so easy when the source is someone posting on social media, but verification there is just as necessary, probably more so.
Verification of text has its challenges, but they pale alongside the difficulties that are increasingly being faced in validating images and video.
When ‘fake news’ videos made their appearance a few years ago there were tell-tale signs such as lack of definition around the lips and unblinking eyes. Since then, the machine has been learning and such irregularities have disappeared. Detection has become far more difficult as AI tools such as DALL-E, Midjourney, and DeepAI can produce photorealistic images in seconds.
Earlier this month, Jeffrey McGregor, the CEO of Truepic (a tech company that produces image-authentication software), spoke to CNN about a fake image purporting to show an explosion near the Pentagon. He described the picture as the tip of the iceberg of what is to come and added: “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it…When anything can be faked, everything can be fake. Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”
The AI tools that create fakes are good, but they are not perfect, and Germany’s state-owned broadcaster Deutsche Welle has produced a checklist for detecting AI images:
- Zoom in and look carefully (magnification reveals data errors; a simple cropping script for this is sketched after the list)
- Find the image source (reverse image search tools such as TinEye find original images)
- Pay attention to body proportions (AI-generated images often wrongly size ears, hands, and feet)
- Watch out for tell-tale errors (AI tools have trouble reproducing hands, teeth, jewellery, and glasses)
- Is the image too smooth? (AI seldom renders skin blemishes)
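The first check is easy to automate in part. Here is a minimal sketch using the Pillow imaging library (the file path and crop coordinates are placeholders) that crops a suspect region, such as a hand or an ear, and magnifies it for inspection by eye:

```python
# Sketch: magnify a suspect region of an image for manual inspection,
# per the "zoom in and look carefully" check. The path and coordinates
# are placeholders for illustration.
from PIL import Image

def magnify_region(path: str, box: tuple[int, int, int, int],
                   factor: int = 4) -> Image.Image:
    """Crop the region box = (left, top, right, bottom) and enlarge
    it so artefacts around hands, teeth, jewellery or glasses are
    easier to spot by eye."""
    img = Image.open(path)
    region = img.crop(box)
    width, height = region.size
    return region.resize((width * factor, height * factor),
                         Image.Resampling.LANCZOS)

# Example: zoom in on the area around a subject's right hand.
magnify_region("suspect_image.jpg", (400, 600, 560, 760)).show()
```

None of this replaces a trained eye; the script only makes the inspection easier.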
All of this requires vigilance and a keen eye. For example, an image of the Pope in a large white puffer jacket went around the social media universe and back before anyone noticed that he had only four fingers on his right hand and unusually long fingers on his left.
In May the BBC launched Verify, which will be tasked with authenticating video supplied to the broadcaster. In all, BBC Verify comprises about 60 journalists with a range of forensic investigative skills and open-source intelligence capabilities. In addition to verifying video and images, the team will fact-check, counter disinformation, and analyse data.
While no New Zealand media organisation could come close to such a resource, the scale of the BBC’s commitment does indicate the breadth of the problem of AI-generated content and disinformation campaigns.
The tsunami may be so big that the ultimate solution will lie not in detecting fakes but in producing genuine, authenticated images and professionally verified facts.
That is the thinking behind an organisation called the Coalition for Content Provenance and Authenticity (C2PA). Established in 2021, its members include the BBC, Microsoft, Adobe, Intel, Sony and Truepic. It is developing technical specifications for content provenance and authentication that can be built into images and video as they are created. This metadata stays with the material and is updated every time a change is made. Legitimate images and video will carry a small identifier that links viewers to the material’s history. Manipulated material and AI-generated material will trigger a warning.
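The real C2PA specification is far more elaborate: manifests are cryptographically signed and embedded in the file itself. But a toy sketch of the underlying idea, a chain of edit records in which each entry commits to the hash of the one before it, might look like this (the record fields are simplified inventions, not C2PA’s actual format):

```python
# Toy provenance chain in the spirit of C2PA. The real specification
# uses signed, embedded manifests; this simplified model just shows how
# chaining hashes makes any undeclared change to the history detectable.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def new_manifest(image_bytes: bytes, creator: str) -> list[dict]:
    """Start the provenance record when the image is first captured."""
    return [{"action": "created", "by": creator,
             "content_hash": sha256(image_bytes), "prev": None}]

def record_edit(manifest: list[dict], edited_bytes: bytes,
                editor: str, action: str) -> list[dict]:
    """Append an entry for each change, chained to the prior entry."""
    prev_hash = sha256(json.dumps(manifest[-1], sort_keys=True).encode())
    manifest.append({"action": action, "by": editor,
                     "content_hash": sha256(edited_bytes),
                     "prev": prev_hash})
    return manifest

def verify(manifest: list[dict], image_bytes: bytes) -> bool:
    """The file as received must match the hash in the final entry;
    a mismatch means an undeclared change was made after the record."""
    return manifest[-1]["content_hash"] == sha256(image_bytes)
```

A verifier that walks the chain and re-computes each `prev` hash can detect tampering with the history itself; in C2PA, digital signatures perform that role.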
Only time will tell whether C2PA will become the worldwide standard for authentication, but it points the way to the news media’s ultimate weapon against malevolent AI-generated disinformation – truth.
Truth, however, is not simply a matter of verifying facts. It also requires the building of trust so that people believe what you tell them. And achieving that is as challenging as repelling the robots.
