No reasonable person who had to endure even part of the Christchurch mosque shooter’s livestream video could object to a law designed to stop that sort of perversion.
The Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill is intended to do just that. It would make the livestreaming of objectionable content a criminal offence with a potential sentence of up to 14 years’ imprisonment. It would also impose heavy fines on the owners of platforms that delay or ignore takedown orders.
After the Christchurch massacre the Chief Censor moved to declare both the shooter’s video and his so-called manifesto to be objectionable (thereby making it unlawful to possess them). It was the right thing to do but, through no fault of his own, it did not work. Yesterday, as I was writing this commentary, I did a Google search for the video. I was presented with the usual news stories about the gunman’s video – and the issues in removing it from the Internet – and, on the second search page, a link to a 16-minute 55-second recording of the livestreamed video, complete with a Live4 logo. I opened the link to determine its authenticity and, once again, was confronted by death. I had seen part of the video on 15 March 2019 and had no desire to see it again. I watched only enough to verify it was more than the opening sequence broadcast by overseas media. A muzzle flash brought it all flooding back.
The Bill allows the Chief Censor to act in real time. In other words, a takedown order can be issued while a livestream is still underway.
A recording, like the one I viewed yesterday, is covered by the existing Act. The usual process is that publications are submitted to the Chief Censor for classification. Under the amendment, the censor will be able to act unilaterally to declare material objectionable where its sudden appearance and viral distribution are “injurious to the public good” and to order an immediate takedown.
There is an air of ‘act of last resort’ about the explanatory note accompanying the Bill. It envisages that the platform carrying the objectionable livestream will first be invited to take down the material voluntarily. That seems a faint hope, given the anti-establishment nature of sites like those used by the Christchurch gunman and the slow reaction times of the major platforms used to distribute such material further. Significantly, nothing in the Bill itself requires the Chief Censor to first seek a voluntary resolution. If Christchurch is any example, the censor will not have time for niceties. Nor should he or she.
The censor will need to carry a stick, and the Bill provides not one but several.
Firstly, although criminal liability will apply only to the livestreamer, online content hosts will be subject to a “civil pecuniary penalty”. Put more plainly, that is a fine of up to $200,000 for each disregarded takedown notice.
Secondly, hosting platforms will lose the ‘safe harbour’ status they enjoy under the Harmful Digital Communications Act. That provision is a nod to digital platforms’ claimed status as ‘common carriers’ that cannot be held responsible for what individuals post on their sites. This Bill strips them of that status in relation to livestreaming of objectionable material. It says explicitly that they can be prosecuted for breaching this proposed part of the Films, Videos, and Publications Classification Act.
It will be interesting to see how that section fares during the Bill’s select committee process. In Britain, harmful digital communication legislation is slowly wending its way through the parliamentary system. A White Paper examining the issue noted: “The new regulatory regime will need to handle the global nature of both the digital economy and many of the companies in scope. The law will apply to companies that provide services to UK users”. However, livestreamed actions like those of the Christchurch gunman are not intended for local consumption. They aim at a worldwide audience – as does much of the harmful communication the British wish to curtail – and use the transnational Internet to do so. This has led, in the case of the emerging British legislation, to warnings about extraterritoriality, or attempting to impose laws beyond one’s own jurisdiction. Don’t be surprised to see the likes of Google and Facebook closing down their small New Zealand offices and toughing out attempts to bring civil actions against them for failing to observe takedown orders.
There is, however, a third ‘stick’ that, on the one hand, could overcome transnational intransigence but, on the other, has a Big Brother feel to it for New Zealanders.
The Bill makes provision for future mechanisms to block or filter out objectionable material. In other words, it could theoretically stop you seeing material irrespective of the attitude of the platform owner. The intent of the Bill is to protect the public from material that individuals and society in general would be better off without. The Christchurch video clearly falls into that category.
The Bill itself would authorise the Department of Internal Affairs to establish an electronic system to block or filter three categories of content:
- Material subject to an interim classification under urgency provisions (such as livestreamed content)
- Material already deemed objectionable under the classification system
- Material an inspector (a person appointed under the Act with wide powers of search and seizure) ‘believes on reasonable grounds to be objectionable’
The legislative definition of ‘objectionable’ is necessarily wide. Make it too prescriptive and it is bound to miss material it should capture. However, there is always a downside to such legal mechanisms, and I find it useful to apply what I call The Trump Test. That test requires the legislation to be measured against how it might be misused by a populist leader who has coerced his or her party, manipulated the political and legal process, and interfered in the appointment of officials. Could such a leader use this legislation to block material in his or her own political interests? Could the electronic system be used to block other material beyond the jurisdiction of the Classification Office?
If I look for a present-day context – and mix jurisdictions – I might ask this question: Could it be used to block live coverage of George Floyd protests outside the White House on the grounds that demonstrators are carrying signs containing obscenities that denigrate the office of President?
I have no fears about the use of the proposed legislation by the present Chief Censor but, on the basis that such laws should be fit for purpose under administrations we do not yet know, this section of the amendment needs greater safeguards.
There must be a legislatively mandated governance and review body. It is not good enough that oversight be governed by regulations that can be readily changed.
Endorsement by the chair of that body should be required before a blocking filter can be applied.
That body should guard against ‘mission creep’. It is one thing to block a specific piece of video; it is quite another to block classes of material. Already in the proposal is authority to block entire websites or online applications. And the electronic system could have uses beyond the current amendment’s purpose. It is a small technical step from blocking livestreamed murder to applying blocking filters to, say, pornographic material (currently under investigation by the Department of Internal Affairs). My championing of free speech does not extend to condoning the exploitation of women and children, but the use of such algorithmic filters overseas has proved problematic. More importantly, the current amendment bill must not become a Trojan horse for a wider-ranging electronic blocking system activated by a small amending extension to the legislation.
And, finally, there needs to be a news media exemption. There is a world of difference between a nobody seeking worldwide infamy by livestreaming his unspeakable crimes and news media covering events that might appear objectionable but which legitimately fall within the public’s right to know. New Zealand media coverage of the Christchurch mosque attack demonstrated that they can be trusted to shield the public from objectionable material of the sort this proposed legislation is designed to curtail. The legislation should not contain any possibility that it could be used against them by a less benign administration.