Kiwi kids on social media


Stuff’s bold clarion call.

Moralists invented hell so they could inflict cruelty with a clear conscience, according to Bertrand Russell. I think technocrats invented social media so they could inflict untold damage with no conscience at all.

Yesterday, Stuff suspended all activity with the social media giant Facebook and its associated platform Instagram in reaction to that indifference.

The trial applies across all titles owned by New Zealand’s largest news publisher. It is a big call: Nearly 953,000 people follow Stuff’s news Facebook page, 134,000 follow its Instagram account, and it has dozens of other Facebook pages for its various titles and brands. Those users, however, have a clear alternative – Stuff’s own platforms.

Stuff ceased advertising on Facebook after it carried footage of the Christchurch mosque attack that the shooter live-streamed. Yesterday’s move was another principled stand in response to Facebook’s abysmal record on hate speech. Stuff should be applauded for its bold move – likely to be the first of many under its new owner Sinead Boucher – and it should act as a clarion call for other media companies to follow its example and give the multinational the fright of its life.

Last week, more than 500 companies reacted to the proliferation of hate speech on Facebook’s pages following the Black Lives Matter campaign and joined the Stop Hate for Profit advertising boycott, which could put a sizeable dent in the social network’s US$70 billion in annual ad revenue. They rightly judge that the only way to force Facebook and its ilk to show real corporate responsibility is to hit them hard in the pocket. However, for the umpteenth time, Facebook offered ultimately self-serving ‘solutions’ to yet another problem of its own making.

Stuff and the advertisers were reacting to a rising tide of public anger, but another insight into Facebook’s attitude to the harm it and other social media platforms inflict was provided last month in a report on how content is moderated.

The report by New York University’s Stern Center for Business and Human Rights showed that less than three per cent of material removed by Facebook related to harmful content such as nudity, violence and hate speech. Of the 3.7 billion items removed, more than half were spam and most of the remainder related to fake accounts. Although the latter includes so-called ‘sock puppets’ that spread information (or misinformation) through multiple accounts, both spam and fake accounts can interfere with the data analysis and programming through which lucrative advertising is targeted at individuals. Of the 100 million pieces of harmful content removed, less than 10 per cent was hate speech, while only 2.3 million items related to bullying and harassment – and we’re talking here about content generated worldwide by 2.6 billion users.

Part of the reason for such low levels of ‘cleaning’ is the fact that Facebook relies heavily on artificial intelligence to do its policing. In fact, almost all of the child pornography and violent content, and close to 90 per cent of the hate speech, removed from the platform is captured by machines. And it outsources much of its remaining moderation work. You can read the Stern Center report here.

Now, before you stop reading because all of this is remote and doesn’t directly affect you, let me bring it home. I’m talking about your children, grandchildren, nephews and nieces, and the kids next door here in New Zealand.

Last week the Broadcasting Standards Authority and NZ on Air released the results of a Colmar Brunton survey of children’s use of media in this country. Part of that study examined how our young use social media. It contains a wealth of information about their exposure to the phenomenon, starting with how they access it.

More than half of our children aged between six and 14 have access to smartphones, while around 60 per cent have access to tablets and computers. Of course, access increases as children get older, so in the vulnerable 12-14 age group it is higher still.

Social media use doubles in percentage terms after age 11 and over 60 per cent of 14-year-olds use Instagram. More than a quarter of them use Facebook.

More than 80 per cent of children use the Internet on a typical day and, in that 12-14 age group, most do so unsupervised. Almost three-quarters have seen something on the Internet that bothered them, yet less than half of their parents employ filtering software or parental controls. This may be due to the report’s finding that more than half of parents trust their children to make the right online choices and a quarter do not know how to use parental controls.

They are certainly trusting when it comes to their children’s social media accounts: less than a quarter of parents have access to them. It appears most oversight is limited to checking their children’s online profiles and lists of friends, although only around 40 per cent do so.

In any event, oversight has its limits. Instagram, Facebook and Snapchat all have private messaging services. In the case of Snapchat, the messages disappear by default after they have been read. And six per cent of children use WhatsApp, Facebook’s end-to-end encrypted messaging service.

The Children’s Media Use study was not charged with a detailed analysis of the harm that social media can cause: It was primarily concerned with what media children access and how they do so. Within the research, however, is a subtext of vulnerability. It shows how exposed children are to social media, the potential for harm inherent in the nature of the services they use, and the limits of parental control. The report can be accessed here.

Last month the government stepped up its efforts to shield children and young people from online harm. The Keep It Real Online campaign is a joint effort by the Department of Internal Affairs, Netsafe, the Office of Film and Literature Classification and the Ministry of Education. You may have seen the commercial where two porn stars turn up at a teenager’s door or another in which little Laura outs a bully whose taunts have gone viral.

Such initiatives are important – and this campaign devised by the agency Motion Sickness may be particularly successful – but it is high time the world sheeted home responsibility to where it should lie.

Yes, the libertarians will say that responsibility lies with the users of social media and, of course, people should be responsible for their own actions. However, we know full well that there are perverted minds, malcontents, and cowards who become brave behind the shield of anonymity. They will not take responsibility. Nor can we expect children – whose faculties of social awareness and propriety are still developing – to take full responsibility.

We can expect the providers of these services to take responsibility.

TikTok, a social media app used by about a third of New Zealand children and owned by the Chinese company ByteDance, may be showing the way, albeit in response to a hefty fine over breaches of US child privacy laws. It no longer allows children under 13 to register without parental permission and has added a feature that lets parents remotely set restrictions on their children’s accounts. The new feature, called Family Pairing, allows parents to link their children’s accounts to their own, from which they can disable direct messages, turn on restricted content mode, and set screen time limits. TikTok also has a privacy mode that restricts connections to a child’s friends.

The British government has moved to impose controls on social media to protect children and vulnerable adults. In February it released a white paper that signalled its intention to establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services. A regulator would oversee the new law, and a new ‘Safety by Design’ framework would help companies incorporate online safety features in new apps and platforms from the start. Unfortunately, Covid-19 has delayed implementation of the legislation, which the UK government claims will be ‘a world first’ in child protection.

Slowly, and I hope inexorably, governments are moving toward regulation of social media giants that have regarded themselves as untouchable. The multinationals, Facebook in particular, have a collective mindset that seems incapable of breaking free from that bulletproof mentality, but break free they must. Self-interest may be the principal driver of corporations in general, and public safety is an issue that many have faced with varying degrees of probity. Few, however, would knowingly risk the wellbeing of children.
