AN ADDRESS TO THE NELSON BRANCH OF THE NEW ZEALAND INSTITUTE OF INTERNATIONAL AFFAIRS 29 OCTOBER 2025
Misinformation and disinformation are often confused. So, to start, let’s be clear on what we are talking about. Misinformation is false or misleading information that has been created inadvertently and includes honest mistakes. Disinformation is false or misleading information deliberately spread to manipulate a person, social group, organisation – or, indeed, an entire country. A related term, malinformation, describes genuine information deliberately used to cause harm.
We are not concerned here with honest mistakes or sloppy inaccuracies. We are talking about disinformation. There is another phrase to describe it: weaponised lies.
This may be seen as a 21st century scourge, but disinformation goes back a very long way.
In fact, disinformation is as old as antiquity.
Julius Caesar was a fast and loose player with the truth, particularly in demonising the Gauls. His heir, Octavian, waged a concerted disinformation campaign against Mark Antony, characterising him as a drunk and a womaniser who had been corrupted by the Egyptian queen Cleopatra. He didn’t have newspapers, so he used speeches, writings, graffiti and even meaningful symbols on coins. And if you think Nero fiddled while Rome burned, you are probably wrong. There are reports that he rushed back from his villa outside the city when he heard news of the fire. The fiddling stigma is what has endured.
That is because disinformation can be enduring.
We still see the storming of the Bastille and the freeing of a multitude of political prisoners as the enduring symbol of the French Revolution. In fact, the prison is believed to have contained just seven prisoners.
Come forward to the nineteenth century: specifically, to February 15, 1898. The cruiser USS Maine blew up in Havana Harbour and the New York Journal ran banner headlines blaming Spain…with not a shred of evidence that it was responsible. This was one of the major catalysts for the Spanish-American War. The blame almost certainly lay with firedamp created in the ship’s coal bunkers…an accidental explosion. The Journal’s publisher, William Randolph Hearst, is reputed to have sent a telegram to newspaper illustrator Frederic Remington in Cuba. It supposedly read: “You furnish the pictures. I will furnish the war”.
The twentieth century saw disinformation institutionalised and, in many ways, normalised. In 1922, the American journalist Walter Lippmann wrote a book on public opinion that had been informed, in part, by the demonising propaganda of the First World War. In it he talked about “the pictures in our heads”. He added: “But what is propaganda, if not the effort to alter the picture to which men respond, to substitute one social pattern for another?” While not all propaganda is disinformation, weaponised lies are among its most potent forms.
Nazi Germany had a genius – albeit an evil one – heading its Ministry of Propaganda. If he had been born in America, Joseph Goebbels could have taken Madison Avenue’s advertising world by storm. He was a gifted persuader. After the war, an academic distilled 19 propaganda principles from Goebbels’ extensive diaries. Principle 7 states that credibility alone must determine whether propaganda output should be true or false. No moral determinants: Simply what is believable. He also advocated disinformation when truth was inconvenient.
There are countless examples of the Nazi regime’s use of disinformation. One was a newspaper published on the eve of the invasion of Poland. It was all Poland’s fault: Gott im Himmel, they were getting ready to invade Germany!
But do not, for one moment, think disinformation was only the resort of villains.
During the Second World War, a man named Sefton Delmer ran a radio station with the call-sign Gustav Siegfried Eins. It broadcast in German and sounded like other German radio stations, but it was, in fact, a British Government ‘black propaganda’ operation that spread disinformation. The British also produced fake German newspapers during the war.
During the Cold War, both the U.S. and the Soviet Union used disinformation as a core tactic. The superpowers competed in an arms race of fictions. Two small examples: The Soviets blamed the U.S. for the AIDS epidemic on the African continent, while the Reagan administration helped to spread stories of “booby trapped” children’s toys sold to Afghan families by the USSR.
Recent research estimates that 64 per cent of national elections worldwide were targeted in some way by either the United States or the USSR during the Cold War. The United States alone attempted to instigate 63 regime changes in foreign countries using only covert methods such as disinformation. Twenty-four of these attempts succeeded.
So, disinformation is nothing new, nor is its use by states and their agents. However, the digital age has imbued weaponised lies with new levels of potency.
Before outlining some of the international dimensions of disinformation, I want to briefly explain why it can work so effectively anywhere in the world.
Why do people believe lies?
We all have built in beliefs and biases. We are conscious of some of them, but many are buried within our psyche. It’s called unconscious bias. Skilled purveyors of disinformation know how to target biases and exploit them. We believe what we want to believe, whether we know it or not.
Unconscious bias includes (but is by no means limited to) these types:
- Anchoring (over-reliance on the first information we receive)
- Band-wagoning (we believe what many others believe)
- Blind spots (we don’t know what we don’t know)
- Confirmation (support for our own worldview)
- Information (we get information from a particular silo)
- Authority (we tend to accept or reject it)
The manipulators then combine bias with emotional triggers – powerful feelings over which we may have only limited control: Negative emotions such as fear, disgust, anger, and – most significantly – hate. Or positive emotions such as joy, surprise, affection and – most significantly – trust.
Then we can add ignorance, and our willingness to socialise or spread whatever we hear, irrespective of whether it is fact or fake.
Combined, this becomes a concoction that is very powerful indeed. It is a potion harnessed by both state actors and agents of influence.
How does that translate in the digital age?
The triggers remain the same – they are as old as the human race. Deception, manipulation, and influence by state-sponsored actors continue to embrace everything from word of mouth and public meetings to the print media and so on. However, the digital age has seen the means of delivery – and the forms of manipulation – expand exponentially. It has also placed the means of mass distribution in the hands of groups that, in the past, had limited ability to spread their messages.
Both friends and potential foes can (and do) indulge in orchestrated lies. However, I am going to confine myself to countries that may pose potential or real threats to this country. The threat may be in the form of attacks on democracy and institutional trust, or radicalising the disaffected.
It’s important to note that the New Zealand Security Intelligence Service’s latest report on New Zealand’s security threat environment states it has not seen any sophisticated state-backed information operations directly targeting New Zealand.
However, it notes that many New Zealanders have almost certainly consumed information manipulated by foreign states, even if they are not the target audience of that information.
There are foreign states that see strategic advantage in distracting populations or damaging social cohesion by exacerbating tensions between social, ethnic, or political groups.
Some offshore violent extremist groups continue to use online spaces for recruitment and radicalisation, targeting impressionable young people – including, indirectly, in New Zealand. These groups take advantage of social divisions and offshore conflicts or crises to push malevolent or violent rhetoric. We saw it during the Covid pandemic.
The fact that this country has not been directly targeted is not grounds for complacency. We remain a potential target, and the threat from disinformation consumed by random, untargeted New Zealand online audiences, while very difficult to measure, is certainly real. The World Economic Forum’s two-year projection of global risks, published this year, puts misinformation and disinformation at the top of its risk assessment for 2027.
Britain’s Foreign Office stated the threat succinctly. It said: “[State sponsored] actors are creating and using online and physical networks to promulgate malign narratives and content, targeted at specific audiences, at volume” [my emphasis].
We cannot discount the likelihood of our ‘friends’ – in Five Eyes and beyond – using nefarious means to influence others, but here I am going to concentrate on four ‘malign actors’ identified by the Foreign Office and MI5: Russia, China, Iran, and non-state terrorist groups. We may not be directly in their sights, but what they do and how they do it is instructive – and a useful guide to where we need to be vigilant.
Let me start with Russia. I’m going to concentrate on the Russian Federation at some length, as it provides very good examples of how disinformation operates.
Its disinformation infrastructure is sophisticated and large. In a report prepared for the New Zealand government it was described as “arguably the most capable actor in the autocratic world and the most proficient at leveraging misinformation and disinformation to promote its foreign policy agenda”. The Russian state devotes considerable resources to coordinated networks that promote disinformation at scale. Its activities have expanded enormously since its annexation of Crimea and invasion of Ukraine.
The Kremlin’s disinformation operations have been described as “an outsourced propaganda ecosystem” because it involves both state and non-state actors overseen by the Presidential Administration of the Russian Federation. This includes not only the official government sources, and media like RT and Sputnik, but also a wide range of actors supported by different intelligence services. The Russian Orthodox Church also plays a role, and Russian oligarchs use their own reach to amplify Moscow’s message, as do various Kremlin-associated media outlets operating outside the Russian Federation. Russia also uses more specialized outlets to reach different audiences.
These various sources spread propaganda in many languages, and Moscow scatters the sources among a wide range of supposed originating countries, such as India and Canada as well as Europe and the United States. Many articles are originally published in English for wide distribution.
By design, these Russian sources interact with each other and build momentum for particular stories. Accounts on Facebook, X, and other social media – including encrypted services – create rumours and chatter that official news media then “cover.” This coverage, in turn, is used to bolster social media campaigns by allowing them to link to official sources.
Russian disinformation sits within a broader strategy that includes network infiltration and cyber espionage – they are now hacking the systems of even small companies that support Ukraine’s defence – plus denial-of-service attacks on infrastructure. Microsoft’s 2025 Digital Defence Report found that the strategy is making significant use of existing cybercrime circuits.
I want to use one example to demonstrate the way Russian disinformation is propagated.
It is a project called the Doppelganger campaign. Doppelganger was only one of a coordinated series of deceptions. It is the product of Russia’s Social Design Agency and began in 2022 to support Russia’s goals, principally in relation to Ukraine, but it encompasses a broad international spectrum.
What makes it interesting is that it uses a hybrid adaptation of various media formats to achieve its ends.
The operation became notorious for making clones or mirror sites of legitimate websites to spread disinformation. These included news sites, government websites, and a variety of other platforms. Content is often amplified through fake personas on social media, as well as through paid advertisements.
The “Doppelganger” component consisted of an estimated 700 mirror sites. According to the FBI, SDA acted as coordinator and operator for a campaign that produced and disseminated almost 40,000 pieces of content and 33.9 million comments around the world between January and April last year. Perhaps its greatest success lay in being discovered: Western media characterised it as a stunning example of how malicious – and how bloody clever – the Kremlin is.
In 2022, sanctions against the Russian Federation cut off a number of its ‘legitimate’ purveyors of disinformation, such as the satellite service RT (carried in New Zealand on Sky until it was dropped after the invasion of Ukraine) and the Sputnik news agency.
Russia’s banned overseas-focused news services were replaced by mirror websites. Those clones were aided by typosquatting – registering domain names that are slightly different from those of legitimate, popular websites. Many people have encountered these when they don’t get a URL quite right as they type it into their browser. Anyone who clicks on one of the SDA versions is redirected to the relevant malicious mirror site.
Both tactics were employed on an unprecedented scale to mimic credible Western news sources like the UK’s Guardian and local news websites in the United States. They served as vehicles for content distribution. The links to the websites carrying malicious content were then disseminated through networks of social media groups and amplified by a swarm of trolls.
As the use of mirror sites indicates, Russia is a widespread user of false flags – misrepresenting sources or involvement. We don’t yet know whether SDA was involved in this one but in October Russia excelled itself with what the fact-checking site NewsGuard called a double false flag over the extraordinary theft from the Louvre. It seeded social media with a story that French authorities had found a Russian passport at the scene – implicating Russia in the jewel heist. But the posts were brimming with sarcasm and claimed the passport had been planted by the French police to discredit Russia. Think about it: If the thieves were smart enough to steal $178 million worth of jewels in seven minutes, would they really be dumb enough to leave a passport behind? Had to be a plant, didn’t it? In fact, there is no evidence that any passport, irrespective of nationality, was found by Paris police.
SDA has flooded several social media platforms with content, at high volume. The expectation is that algorithms push this content into real users’ social media feeds, and that they may consequently be drawn down the Russian discursive rabbit hole. There is evidence that Russian interests have infiltrated some social media platforms to manipulate algorithms. Romanian intelligence, for example, discovered manipulation of TikTok during last year’s presidential election that was likely to have been state-sponsored and pointed the finger at Moscow.
Russia also makes widespread use of online advertising to spread disinformation. A study released in January found SDA had posted more than 8,000 political advertisements on Facebook despite restrictions barring companies from doing business with it. The tripartite study found the agency had exploited what it judged to be lax enforcement by Facebook to place an estimated $338,000 worth of ads aimed at European users over a period of 15 months, even though Meta itself had highlighted the threat.
What does its disinformation look like? Among its triumphs was a story stating that Volodymyr Zelensky’s wife had gone on a luxury buying spree in Paris. More recently, there has been a story stating that the Ukrainian president has added a $US79 million Wyoming ranch to his $US682 million American property portfolio. Trolls made sure the stories were widely shared and discussed on social media.
On July 30, during the Moldovan election campaign, a fake website imitating OK! magazine published a fabricated article with a shocking headline alleging that President Maia Sandu had illegally bought sperm from gay celebrities (including Elton John) to conceive a child. It carried the byline of a real reporter. It was distributed not only in the East European nation but also internationally, in different languages. It was ‘outed’ as fake, and Doppelganger was said to be the source. Maia Sandu won the election nonetheless.
A study published by Lund University has calculated that the Social Design Agency was the source of 44 per cent of Russian propaganda in Germany, 41 per cent in France, and 31 per cent in Italy. In little over a year, SDA successfully filled the information void created by sanctions against Russian state media.
It didn’t care whether the disinformation was exposed. Exposure did what? It repeated the disinformation and gave it even more exposure. Discredited? Only for some. Because, as I said earlier, we believe what we want to believe.
SDA has also been implicated in one of Russia’s most audacious moves: subverting the very process of fact-checking. It began faking fact checks. The result was War on Fakes, a whole network of sites that takes fact-checking to sinister depths by ‘proving’ fake stories are ‘correct’.
Let me now turn to China.
James Kynge, a senior research fellow at Chatham House, recently told a Parliamentary inquiry that “China’s disinformation approach pretty much spans the waterfront.” It is a concerted, whole-of-Government approach, backed by very strong Chinese Government organisations.
China’s information operations are conducted by a range of actors: the PLA’s Strategic Support Force, which conducts cyber operations as part of the PLA’s political warfare; the Ministry of State Security, which conducts covert operations for national security; the Central Propaganda Department, which oversees China’s domestic and foreign propaganda efforts; and the Ministry of Public Security, which enforces China’s internet laws.
China’s state-media outlets and Ministry of Foreign Affairs officials are also running covert operations that amplify their propaganda efforts. And increasingly, as demonstrated by takedown reports from major platforms, China’s government agencies are conducting collaborative information operations — outsourcing to Chinese companies as well as non-state actors such as hacktivist groups (those breaking into secure networks for social or political goals).
Taiwan is a particular target for Chinese disinformation. The Taiwan National Security Bureau has calculated that there were 2.2 million disinformation messages sent by China to Taiwan last year, almost double the number in 2023. The aim was to sow distrust in the Taiwanese Government, and towards the US, Taiwan’s armed forces, and its political structures.
One China-linked disinformation actor named Dragonbridge began its activity in 2019 and was described by Google’s Threat Analysis Group in 2022 as the most prolific information-operations actor it has tracked.
In 2022, following the announcement of a possible visit to Taipei by then Speaker of the US House of Representatives, Nancy Pelosi, Dragonbridge shifted some of its focus towards her, her family and their finances. Around the time of the 2024 US election, the Chinese disinformation narrative tried to persuade the Taiwanese public that the US was unreliable. There were stories about the Taiwanese Government harvesting organs and blood to sell to the US, and disinformation about the US poisoning pork and exporting it to Taiwan. Dragonbridge has also embarked on concerted campaigns against the Taiwanese president, in which it has employed AI extensively.
China not only has extensive resources devoted to the task, it also employs guns-for-hire. A study by the International Institute for Strategic Studies has found that disinformation-for-hire exists and is increasingly lucrative.
China’s disinformation campaigns employ material produced to order by content farms in Malaysia, which has been described as the epicentre of Chinese-language disinformation campaigns likely orchestrated by the Chinese state. It is also making increasing use of small media organisations with ambiguous ties to the Chinese state, and therefore better able to disguise the origin of disinformation. Like Russia, it makes extensive use of fake social media accounts to retweet and repost material. For example, data analysis found one post was simultaneously reposted to 19 other sites, and the propagation was exponential from there.
Beijing does not limit its activity to the Asia-Pacific region, although the Centre for Strategic and International Studies reports an “increasing sophistication of China’s information operations” in the region. While its immediate strategic interests may lie there, its commercial interests are global.
In 2022, MI5 Director General Ken McCallum stated that the agency was “seeing an increasingly assertive Chinese Communist Party using overt and covert pressure to bend other countries to its will”, with the “Chinese authorities playing the long game in cultivating contacts to manipulate opinion in China’s favour.”
The UK is a particular target in relation to Hong Kong. Other NATO allies have, however, also expressed concerns over sustained malicious cyber and hybrid activities, including disinformation, stemming from China.
Canada has had a fractious relationship with the People’s Republic of China since the tit-for-tat arrests of each other’s nationals in 2018. In 2019, Canada set up the whole-of-government Security and Intelligence Threats to Elections (or SITE) task force.
A retrospective report by the Canadian Government on its 2025 general election, released in October, acknowledged that, in advance of the election campaign, the SITE task force had assessed that PRC officials and proxies were likely to conduct foreign interference activity using a complex array of both overt and covert mechanisms. They were highly likely to use AI-enabled tools and to leverage social media. The PRC was also highly likely to target Chinese ethnic, cultural, and religious communities in Canada using clandestine and deceptive methods.
The report revealed that over the course of the 2025 campaign, the task force had detected an information operation launched on the most popular news account on the Chinese-owned WeChat social media platform. Tactics included apparent outrage designed to encourage clicking on the links.
One target was a candidate who had been a vocal supporter of the pro-democracy movement in Hong Kong. A tactic was the filtering of search engine results to cast that candidate only in a poor light.
The campaign received high levels of user engagement and views. Articles received between 85,000 and 130,000 interactions, and an estimated one to three million views.
The articles were amplified by a group of 30 smaller WeChat accounts that Canadian intelligence linked to the Chinese Communist Party’s Central Political and Legal Affairs Commission.
It didn’t work. The report found that, while foreign attempts to undermine the 2025 federal election were detected, when Canadians went to the polls, they freely and fairly exercised their right to vote.
Nonetheless, we can’t take comfort from that result. China’s disinformation structures and outputs are becoming increasingly sophisticated, and they will make full use of cutting-edge advances in artificial intelligence – not least in its ability to create new audio and video ‘realities’.
I will deal only very briefly with Iran, although I acknowledge it has for some time had extensive, organised, connected disinformation and propaganda networks.
Intelligence analysts rate it as an emerging cyber-threat state actor. In the West, its efforts so far have primarily focused on targeting U.S. election infrastructure and disseminating election-related disinformation during the last three elections. Last year the US Treasury imposed sanctions on an Iranian group, the Cognitive Design Production Center – a subsidiary of Iran’s paramilitary Revolutionary Guard. Officials say the centre had worked since at least 2023 to incite political tensions in the United States.
The Gaza War has unleashed a tsunami of disinformation, albeit from both Israeli and pro-Hamas sources. After the Israeli attack on Iran in June, Tehran produced a series of AI-generated fakes depicting Iranian responses, including missile attacks on Israel and the downing of Israeli Air Force and joint-operation American aircraft.
Iran, too, makes extensive use of social media to spread disinformation, and that includes “super-spreaders”. In June, one pro-Iranian account – Daily Iran Military – saw its followers on X grow from just over 700,000 to 1.4 million – a 100 per cent increase – in under a week.
It remains to be seen whether New Zealand will be targeted over our government’s position on Gaza and the decision this month to reimpose sanctions on Iran over its non-compliance with its nuclear obligations.
Finally, non-state terrorist groups: The SIS’s 2025 assessment of the security threat environment assessed a violent extremist attack as a “realistic possibility”. In context, that is the second lowest on the five levels of threat. It says the most likely attack scenario in New Zealand is someone who acts alone, and who has been radicalised online. The Christchurch mosque attacker is an example. Given the focus of this talk on actors with forms of state sponsorship, I am not going to address supremacist and other hate-based groups.
The SIS assessment included a case study of an individual who over the past year almost certainly developed support for a faith-motivated violent extremist ideology through their consumption of online material.
The person consumed Islamic State (IS) propaganda that was designed to ‘prove’ the group’s religious credentials. They then sought additional religious guidance that reinforced IS messaging, and which led to their support for the violent extremist ideology.
The inference is that online radicalisation is a threat to New Zealand.
Disinformation and coercive material are produced not only by IS out of Afghanistan and Africa but also by unofficial outlets utilising its messages. This includes the appropriation of violent imagery to promote other forms of extremism.
Radicalisation targets impressionable young people. It’s unsurprising, therefore, that informal sources promoting extremism include a large number of minors. The most celebrated case was in Belgium, where a 12-year-old produced and disseminated Islamist extremist material.
Gaming imagery is common, mimicking popular video games such as Grand Theft Auto and Minecraft. The Christchurch mosque attacker’s bodycam video, which he livestreamed during the attack, was an appalling copy of digital combat game imagery.
IS is responsible for a number of informal news outlets that disseminate material across social media platforms. A study by researchers at West Point found that one of these outlets, ‘Global Events’, had more than 22,000 followers on Facebook, with corresponding TikTok, Instagram, and Threads accounts, as well as a Telegram channel boasting more than 30,000 subscribers. The outlet shared 80 video reels on Facebook between its creation in October 2023 and last February; these generated more than 1.6 million views.
These informal outlets, created and proliferated at will by individuals and associated groups, are thwarting takedown rules and attempts by both platforms and government agencies to stem the flow. Many users, however, believe them to be legitimate news sources.
The Institute for Strategic Dialogue in Berlin has also found that IS uses ‘sockpuppet’ accounts (existing accounts that are hacked and repurposed to spread Islamic State propaganda) which latch on to and build out popular hashtags on Twitter and Facebook. It should be said that the main targets for this material are in the Middle East and Africa, but wider communities, thanks to the international reach of online media, are not immune.
New Zealand’s SIS says it has observed how the global resurgence of the Islamic State’s propaganda resonates with small pockets of New Zealand’s violent extremist environment. However, that environment is complex, is not limited to extremism that misrepresents faith, and its disinformation may be co-opted into a perverse pick-and-mix lolly bag. At-risk people are likely to adapt what they see and hear to fit their own worldview.
The ideology might be identity-motivated or faith-motivated, or mixed, unstable or unclear. The Christchurch attacker’s so-called manifesto was peppered with statements that cherry-picked different doctrines, which he then rationalised in his own mind. That is testament, perhaps, to the fact that disinformation can be readily repurposed by those who believe what they want to believe.
Let me quickly add another bad actor: the Black Hat agent. These agents create fake news items designed to attract large audiences in order to gain revenue from platform advertising. Last week it was revealed that they had deluged Google’s principal news platform, Discover, with fake stories that not only proliferate but also pollute AI large language models. They can be, and are, co-opted by bad state actors.
Counter-measures? That, I’m afraid, would take as much of your time as I’ve already taken.
One approach I like is called the Truth Sandwich. It’s a layered approach proposed by US linguist George Lakoff.
It starts with a fact – not the disinformation, which comes after the fact. It then explains the reality, and sandwiches the falsehood with another fact.
But beating disinformation isn’t easy. There are numerous toolkits, such as the UK Government Communication Service’s RESIST toolkit – 60 pages on how to resist disinformation. There are fact-checking operations such as the BBC’s Verify, a team of 60 journalists who use highly sophisticated techniques to prove or disprove the material they receive.
But be in no doubt, disinformation is a minefield.
There is a quotation from Aldous Huxley’s Brave New World Revisited that sits atop the preface in the RESIST handbook: “An unexciting truth may be eclipsed by a thrilling falsehood”.
A more down-to-earth iteration is attributed to Mark Twain: “A lie can travel halfway around the world while the truth is still putting on its shoes”.
Except he didn’t come up with the saying. In 2017, a ‘quotation hunter’ credited it to the Anglo-Irish satirist and author of Gulliver’s Travels, Jonathan Swift.
We need to be careful what we believe.
