I have a morbid fear that we Kiwis are not sophisticated enough to know disinformation when we see it. Worse, I worry that we don’t care.
The combination of dramatic advances in artificial intelligence and alarming declines in trust and social cohesion produces a dangerous mixture in which ‘reality’ can become a construct of what we want to believe, and of what others may manipulate us into thinking.
Last Sunday, TVNZ screened the documentary Web of Chaos, which took viewers on a journey from the innocent early days of the digital highway to the sewer that part of it has become. Along the way, we saw its power to influence, corrupt, and deceive. In many respects it looked like a descent into madness. In fact, disinformation expert Dr Sanjana Hattotuwa described it as “an algorithmic amplification of psychosis”.
He was not speaking of a few unfortunates working through their mental issues on the Internet. He said there were 350,000 people in this country using alternative social media platforms – or what he called a “hellspace” – in a toxic mix of extreme attitudes, violent language and disinformation.
In the programme, Disinformation Project director Kate Hannah told how the Covid pandemic had drawn larger numbers of New Zealanders into “the disinformation space” and had led to a broadening of conspiratorial thinking. The documentary showed in graphic detail how that phenomenon had manifested itself near the end of the occupation of Parliament’s grounds.
At the conclusion of the programme, educator Andrew Cowie gave a glimpse of the impact artificial intelligence is having on this already perilous environment, but the programme’s examples failed to convey the level of sophistication now being employed. Nor was that sophistication factored into the optimistic portrayal of the generation of digital natives as tech-savvy sceptics who question the veracity of everything they see and hear.
Really? It is more likely that every generation is susceptible to manipulation if the right buttons are pushed. Emotion wins over intellect when it suits, or when someone knows how to generate a sufficiently strong emotional response.
Perhaps Web of Chaos wanted to end on a positive note despite its title, but I was more inclined to see what preceded that ray of hope as a true indicator of what we face as a society.
I particularly noted a comment by Professor Lisa Ellis (no relation) of Otago University on the social climate in New Zealand. She said that, when she came to this country from Texas, “the egalitarianism was amazing”. She lamented that this is no longer the case and that its decline was “a really dangerous trend”. She is right. Discontent provides fertile ground for disinformation, and a widening gap between haves and have-nots is full of rich furrows.
My admittedly dystopian view was influenced by three recent reports on disinformation. Two were from threat intelligence company Recorded Future (via Insikt Group, its threat research division), and the other was a special report in The Economist earlier this month.
The first Recorded Future report showed how its analysts and R&D engineers had developed three projects to test the malicious use of artificial intelligence. Each used off-the-shelf or open-source software.
In the first project they generated deep fake video and audio of four of the company’s executives in conference calls promoting a bogus sponsorship deal, bypassing security measures in the process.
In the second project, they impersonated legitimate websites and sent manipulated information to targeted audiences, using techniques that ensured the information matched recipients’ political viewpoints. They used AI to automatically analyse data from legitimate news organisations, which allowed them to clone and template those organisations’ websites. A single piece of information could then be placed into many different cloned websites, across different languages and targeting different audiences.
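To make the mechanics concrete, here is a toy sketch in Python – emphatically not Recorded Future’s actual tooling – of how a single fabricated claim can be stamped into many cloned outlets at once. Every domain, template and translated claim below is invented for illustration.

```python
# Toy illustration of one-story, many-outlets templating.
# All domains, templates and translations are invented.

CLAIM = {
    "en": "Officials confirm the report was suppressed.",
    "fr": "Des responsables confirment que le rapport a été étouffé.",
    "es": "Funcionarios confirman que el informe fue suprimido.",
}

# Each cloned site keeps the look of a 'legitimate' outlet;
# only the injected paragraph changes per audience.
CLONED_TEMPLATES = {
    ("daily-example.com", "en"): "<h1>Daily Example</h1><p>{body}</p>",
    ("journal-exemple.fr", "fr"): "<h1>Journal Exemple</h1><p>{body}</p>",
    ("diario-ejemplo.es", "es"): "<h1>Diario Ejemplo</h1><p>{body}</p>",
}

def render_all() -> dict:
    """Render the same claim into every cloned template."""
    return {
        domain: template.format(body=CLAIM[lang])
        for (domain, lang), template in CLONED_TEMPLATES.items()
    }

if __name__ == "__main__":
    for domain, page in render_all().items():
        print(domain, "->", page)
```

The point of the sketch is scale: adding a hundred more outlets or languages is a matter of adding dictionary entries, which is precisely what makes AI-assisted cloning so cheap.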
The third project was to develop malware that evaded detection. One of the most widely used detection tools is YARA, a system of large-scale pattern matching whose rules are constantly being augmented by the security industry. It is highly sophisticated and dynamic, even if the name stands for Yet Another Ridiculous Acronym. The engineers beat the first level of detection, demonstrating the potential for more advanced AI circumvention.
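For readers curious what that pattern matching looks like, here is a minimal sketch using the open-source yara-python bindings. The rule and the sample payloads are invented, and real detection rules are vastly more elaborate.

```python
# Minimal YARA pattern-matching sketch (pip install yara-python).
# The rule and payloads are invented for illustration.
import yara

RULE_SOURCE = r"""
rule demo_suspicious_strings
{
    strings:
        $a = "cmd.exe /c" nocase      // a suspicious shell invocation
        $b = { 6A 40 68 00 30 00 00 } // an example hex byte pattern
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

benign = b"hello world"
suspect = b"... cmd.exe /c whoami ..."

print(rules.match(data=benign))   # [] - no rule fires
print(rules.match(data=suspect))  # [demo_suspicious_strings]
```

Even this toy exposes the weakness the engineers exploited: rewrite the strings a rule hunts for – something generative AI can do tirelessly – and the rule goes quiet.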
In the sort of language loved by intelligence analysts, the report concluded that “organisations need to widen their perception of their attack surface”. I think you can get their meaning.
The second Recorded Future report examined a disinformation network being tracked by Insikt Group. Dubbed CopyCop, it was assessed as most likely emanating from Russia and linked to the Kremlin.
CopyCop uses AI to harvest and manipulate content from mainstream media outlets including Fox News, Al Jazeera and the French network TV5Monde. It ‘weaponises’ that content by prompting AI models to introduce partisan bias. By March it had generated more than 19,000 uploaded articles. The researchers also found a CopyCop site impersonating the BBC website, along with a video-sharing facility. They believe CopyCop’s operators have demonstrated the viability of large-scale AI-generated disinformation.
The Economist’s cover featured fishhooks and the heading ‘Truth and lies, lies, lies’. The issue set out in chilling detail how disinformation has developed and how artificial intelligence will drive its exponential growth. And, just as the problem is getting worse, detection is getting even harder.
Spotting deep fakes may need to become a recognised branch of science. The Economist lists tools developed by a branch of the US Defense Department. They include heartbeat detection through minute variations in skin tone on the forehead (deep fakes are arrhythmic), and forensic analysis of background sound in synthesised voices. These are not tools the average punter is likely to be able to employ. Nor do many organisations possess the skills or technology to detect ‘seeder’ networks like CopyCop, or the ‘spreader’ social accounts that disseminate disinformation across the spectrum.
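The underlying idea of the heartbeat test can be sketched crudely. This is not the Defense Department’s tooling: a real system would extract the forehead signal from video via remote photoplethysmography, and the frame rate, thresholds and synthetic signals below are all invented for illustration.

```python
# Crude sketch of heartbeat-based deep fake screening: real faces show
# a faint periodic skin-tone change at the pulse rate; synthesised
# faces tend not to. All numbers here are invented for illustration.
import numpy as np

FPS = 30  # assumed video frame rate

def pulse_band_power(forehead_signal: np.ndarray) -> float:
    """Fraction of spectral power in the human pulse band (0.7-3 Hz)."""
    signal = forehead_signal - forehead_signal.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return power[band].sum() / power.sum()

np.random.seed(0)
t = np.arange(10 * FPS) / FPS  # ten seconds of 'video'
real_face = 0.1 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.05, t.size)  # ~72 bpm pulse
fake_face = np.random.normal(0, 0.05, t.size)  # no periodic component

for name, sig in [("real", real_face), ("fake", fake_face)]:
    score = pulse_band_power(sig)
    print(f"{name}: pulse-band power {score:.2f}",
          "-> plausibly real" if score > 0.5 else "-> suspect")
```

The real science lies in extracting that signal reliably from compressed video, and in defeating fakes that learn to simulate a pulse; the sketch only shows why rhythm, or its absence, is a tell.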
Much of the defence against disinformation comes only after it has been disseminated – when it has likely already done damage. Only four countermeasures are widely endorsed: labelling suspect content, fact-checking and debunking, content moderation and removal, and public education in media literacy. Only the first two have been proven effective, and even that comes with a strong caveat: we believe what we want to believe.
The Economist recounts a disinformation campaign claiming that Olena Zelenska, the wife of the Ukrainian president, had spent €40,000 on a Paris shopping spree. That story was then topped by US$1.1 million supposedly spent in the stores of Fifth Avenue in New York. Earlier, the couple had supposedly bought Joseph Goebbels’ old villa in Germany. None of it was true, but it formed part of a chain of disinformation that spread across continents and languages. At its peak the shopping-spree story was being mentioned 1,000 times a minute on X (formerly Twitter) – despite having been debunked. Its spread was aided by ‘coordinated inauthentic behaviour’ (likely linked to Russia) designed to fool platform algorithms.
All this malicious computer science has a steadfast ally – human nature. It plays on our willingness to let our baser feelings bubble to the surface. In Coming Up for Air, George Orwell wrote about a Left Book Club meeting and a lecture on Fascism. He was obviously unimpressed by the speaker – “a sort of human barrel-organ shooting propaganda at you by the hour” – but he saw through his tactics from a mile away:
“The same thing over and over again. Hate, hate, hate. Let’s all get together and have a good hate.”
So long as we have something to hate (and irrespective of the degree), disinformation will thrive among us. We won’t see it as falsehood. We will see it as our version of the truth.
