New hate speech laws are heading our way and we may have grounds to dislike them. In fact, we might hate them.
Communication Minister Kris Faafoi has provided Cabinet with an indication of what we can expect in legislation that was flagged following the Christchurch mosque attacks. We will see legislation that seeks to change the incitement provisions of the Human Rights Act and make incitement a criminal offence under the Crimes Act.
The thinking in Faafoi’s 11-page Cabinet paper parallels some of the thinking in proposed British legislation that is considerably more wide-ranging. The UK Bill attempts to regulate harmful digital communications in a way that far outstrips our own laws in that area by imposing on the likes of Facebook and Twitter a ‘duty of care’.
The 133-page draft Online Safety Bill has been presented to the British Parliament by the Minister of State for Digital and Culture, Oliver Dowden. It requires social media platforms to control ‘lawful but harmful’ content and provides the State regulator Ofcom with swingeing powers to make sure it happens.
Few will argue with either the British Bill’s intentions to protect children and vulnerable adults from the toxic effects of social platforms or New Zealand’s recognition that an horrendous act of terrorism has exposed the inadequacies of our existing controls on online hate.
The move to regulate those who thought they were beyond control has been widely applauded, but the Westminster proposals have already given rise to disquiet over their reach and, when draft legislation is produced here, we will see the same misgivings raised.
The greatest concern I have over measures of this type is mission creep, a term coined back in the 1990s by Washington Post columnist Jim Hoagland to describe the gradual or incremental expansion of a mission beyond its original scope, focus or goals.
Mission creep will arise from a combination of unintended consequences and the interpretation of key definitions.
Let me give you some examples of unintended consequences in the UK draft.
It has been interpreted as requiring social media platforms to police their messaging systems, including those with end-to-end encryption like WhatsApp. The Committee to Protect Journalists, which encourages journalists to use these services to protect confidential sources and their ongoing enquiries, has pointed out that they will no longer be secure if that provision stays in the legislation (and the platforms introduce as-yet-unavailable scanning systems). Reporters’ use of services like Twitter clearly will be in the path of the content police.
Telephone conversations are specifically exempted from the legislation, as are online person-to-person audio services (e.g. Skype). There is no exemption for video communications. There is concern that platforms like Zoom will fall under the provisions of the bill and that providers will be required to monitor them. Will journalists’ Zoom meetings have an elephant in the room?
Another example: An impact assessment accompanies the draft Bill. It assesses, for example, the cost to the NHS of treating depression caused by cyber bullying. However, there is also a specific section titled “Intimidation of public figures”. It states that figures in the public eye, such as MPs, campaigners, and judges, frequently receive online abuse and threats.
“This is not only harmful to the individual concerned – it may sway them into making decisions against their better judgement. The fear of abuse and threats may also dissuade citizens (and certain groups in particular) from entering public life, for example by standing for election.”
Does this point to the use of regulation under the legislation to create a special category of protection? Imagine the effect on social media distribution of news items if the platform operators can censor the sort of comments about public figures that we see appended to posts every day. Never mind the offending comment: the result could be removal of the news item itself, followed by a laborious process to have it reinstated.
The greatest risk of mission creep, however, will lie in the interpretation of key words and phrases.
Graham Smith, the British author of Internet Law and Regulation (one of the leading texts in the field), has issued a warning: “Once a presumption takes root that online speech is dangerous, that will inevitably spread beyond individual users’ posts to online communication generally, including the press.”
Smith’s analysis of the Bill on his blog highlights what he says is a question that has dogged the Online Harms project from the outset: what does it mean by safety and harm?
The same question will dog Faafoi and his officials when they confront the need to define ‘hate’ and ‘harm’.
The British Bill defines “harmful” as follows: “the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities.”
Under New Zealand’s existing Harmful Digital Communications Act, harm means “serious emotional distress” but that will be an inadequate definition within the framework of hate speech laws. The Cabinet paper notes there is no universally accepted legal definition of hate speech. The term does not yet appear in New Zealand legislation. Instead our laws are couched in terms such as “speech that is likely to incite others to feel hostility or contempt”.
The danger in both Britain and New Zealand is that the definition will be either too wide (the UK has widened its meaning over the course of the Bill’s development) or too vague. Either way, it could lead to the use of the law for purposes beyond what was originally intended. Expect to see the word ‘hate’ rear its ugly head time and time again.
How it is interpreted will have a major impact on freedom of expression and of the media. The broader and more subjective the definition, the greater the likelihood of harm in the wrong places. Graham Smith doubts the British have cracked it.
The portents darken when you combine that with the legislation’s central premise: that corporate platform providers are responsible for policing content. There are an estimated 24,000 of them in the UK.
The British Bill goes to some lengths to recognise the need to protect freedom of speech, democratic discourse and media freedom. An entire section of the draft legislation is devoted to it, including exemptions for news media websites and complex mechanisms for the restoration of news content taken down on social platforms. In addition to removing illegal content, the platforms are also required to carry out a much vaguer function of protecting posts that are “democratically important”.
The Guardian’s UK technology editor, Alex Hern, says such provisions mean the proposed legislation “has become encrusted with artifacts of the all-consuming culture war” and by that he means the sort of battle for hearts and minds that is playing out in America. He points out that social networks will be forbidden from discriminating against particular political viewpoints and will need to provide equal protections to a range of political opinions, no matter their affiliations. This sounds remarkably like the clarion calls issuing from the US Republican Party and Donald Trump.
Graham Smith’s analysis of these provisions is that the imposition of moderation and filtering obligations raises the twin spectres of interference with users’ privacy and collateral damage to legitimate speech.
“The danger to legitimate speech arises from misidentification of illegal or harmful content and lack of clarity about what is illegal or harmful,” Smith says.
Britain’s draft laws have been three years in the making, and it is fairly evident that many believe the government has yet to get them right.
Kris Faafoi has signalled a consultation process here on his hate speech legislative changes. He will need to consult the best brains in the country if he and the New Zealand Parliament are to avoid adding to the statute books something that can do as much harm as good.
The first step will be to decide what, exactly, we mean by ‘hate’ and ‘harm’.
I’ll end with an anecdote that demonstrates the difficulties.
About five years ago I was having lunch with an eminent broadcaster with a well-deserved reputation for holding power to account. He is also a man of considerable intellect and fair-mindedness.
He asked me to name a politician that I hated.
I said “That’s too strong a word.”
He replied: “You’re just a softie…come on, which politician do you hate?”
With uncharacteristic meekness I told him I didn’t ‘hate’ any of them.
The exchange shows that ‘hate’ is a word that should be used only after spelling out exactly what it means.
Thanks to NZ Herald cartoonist Rod Emmerson for permission to use his cartoon.