Social media under fire in Christchurch

Published on 21/03/2019 | Written by Heather Wright



This time, the platforms need to face up to being part of the solution…

As families begin to bury their loved ones in the wake of Friday’s mosque shootings in Christchurch, the spotlight is glaring on the social media platforms that played a role in publicising the violence and continue to provide an ongoing forum for the hate-filled ideology behind it.

The figures have been reported ad nauseam: Facebook removed more than 1.5 million videos of the shooting rampage within 24 hours of the attack – 1.2 million of which were blocked at upload. The livestream, originally broadcast via Facebook Live, quickly proliferated on other sites, with the footage available for hours after the attack – and reportedly still available should you wish to go down some of the darker rabbit holes of the internet.

“We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.”

The lone terrorist behind the attack had designed it from the outset for maximum coverage, knowing the shocking footage would go viral across the online communities in which he trolled. Just minutes before the attack, the killer posted a link on the fringe message board 8chan (which dubs itself ‘the darkest reaches of the internet’ and a ‘less tame’ version of close relation 4chan, both of which harbour extremist views under anonymity) directing people to the Facebook livestream. In the video itself, the gunman references a popular YouTube channel, a tactic entirely about grabbing attention and accelerating the spread of the video.

The use of music popular in certain circles of the internet further drove not just the spread of the video, but discussion of it and its ideology – and not just on the darker reaches of the internet. YouTube videos of the songs themselves are now filled with divisive, hate-filled commentary about the attacks, as well as links to the gunman’s ‘manifesto’, which he happily uploaded prior to the attacks and which similarly spread across Twitter, Facebook and Reddit.

From the outset, social media platforms – run by some of the richest and most tech-savvy companies in the world – were behind the eight-ball. While the gunman’s Facebook and Twitter accounts were quickly shut down, footage from the attack spread rapidly. YouTube likened the situation to playing ‘whack-a-mole’, with videos being uploaded at an ‘unprecedented’ rate.

The problem was so great that YouTube abandoned human moderation, instead allowing AI algorithms to unilaterally block the videos in an effort to speed up the process.

All the platforms were quick to say they were working to remove the content, with Facebook noting it was also removing any praise or support for the crime and the shooter as soon as it became aware of it.

Meanwhile, Spark, Vodafone and 2degrees all agreed to block customers from accessing three overseas websites – including the 8chan and 4chan anonymous message boards – which had provided access to the video and where the gunman’s actions were being glorified.

In the aftermath, two things have become clear. First, there is no clear evidence of warnings that, if heeded, might have prevented the attacks. Second, the social media platforms have entirely misdirected their efforts to prevent the type of sickening content that festers and incites such violence.

A question of scale, algorithms and incentive
All the main platforms use automated tools to identify and remove content that violates their self-imposed rule books.

Facebook, Twitter and YouTube are founding members of the Global Internet Forum to Counter Terrorism (GIFCT), sharing a database of ‘hashes’, or unique digital fingerprints, for violent terrorist imagery or terrorist recruitment videos identified for removal. The consortium also includes Google, Instagram, Reddit and LinkedIn, among others.

The GIFCT says it shared digital fingerprints of more than 800 visually distinct videos related to the Christchurch attacks via its collective database, along with URLs and context.
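Conceptually, the mechanism works like a blocklist lookup at upload time. The sketch below is a minimal illustration under two stated assumptions: an exact cryptographic hash stands in for the fingerprint (real systems favour perceptual hashes that survive re-encoding), and the database contents are hypothetical.

```python
import hashlib

# Hypothetical stand-in for a shared industry database such as the GIFCT's:
# a set of fingerprints flagged for removal by member platforms.
blocked_fingerprints: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute an exact fingerprint of an uploaded file.

    SHA-256 is used purely to illustrate the lookup; production systems
    favour perceptual hashes, which tolerate re-encoding and resizing.
    """
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(upload: bytes) -> bool:
    """Reject an upload whose fingerprint is already in the shared database."""
    return fingerprint(upload) in blocked_fingerprints
```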

But it wasn’t, and isn’t, enough.

YouTube says it removed 49,618 videos and 16,596 accounts for violating its policies in the fourth quarter of 2018. Another 253,683 videos were removed for violating policies on graphic violence.

But there’s a deep irony in YouTube’s and Facebook’s struggles with flagging this content. They can instantly and automatically flag nudity and copyrighted music or movies; Isis content and material promoting Islamic radicalisation cannot even be uploaded. And yet uploads and live streams of an assault-rifle-wielding Australian white man seem to pass through just fine. Why is that?

Once the automated content filter fails, the fallback hash system – first used to combat child sexual abuse imagery – is fraught. It relies on a human moderator to add the hash in the first place, and once added, it is a simple matter to create a new version of the file that slips past the algorithm. And we saw that in bulk last week, with users recording the livestreamed video on their own devices to create new copies – each with a fresh digital fingerprint absent from the shared database.

Others took screenshots from the livestream, shared shorter versions of the video, added watermarks, changed the music, colour or size of the video, or even added animated figures to the footage to deceive the algorithms.
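The weakness is easy to demonstrate. In the illustrative snippet below, changing a single byte of a file (the digital equivalent of re-recording, watermarking or resizing the video) produces a completely different exact hash, so a blocklist of known fingerprints never matches the copy:

```python
import hashlib

original = b"...bytes of the original video..."  # placeholder content
modified = original + b"\x00"                    # a single-byte change

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
# The two digests are entirely unrelated, so exact-match filtering misses
# the copy. Perceptual hashes, which measure similarity rather than
# identity, are more robust, but heavy enough edits can defeat those too.
```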

Let’s be clear: users were actively working to evade the content moderators. Much as in cybersecurity, the bad actors are rapidly improving their abilities.

Questions have also been raised about the targeting and training of such algorithms. Have these Western companies trained them only to alert on Islamic imagery and audio signals, ignoring repeated threats much closer to home?

Compounding the problem, algorithms also played a part in pushing the images even more widely, as search volumes on New Zealand, mosques and the gunman’s name told the platforms this was trending content we all wanted to see.

The platform companies have been quick to point out the issue of scale. We’re talking about companies which are, for all intents and purposes, the equivalent in scale of nation states. It’s scale that makes them money. And it’s also scale which can be used as a key excuse – they’re just too big to police content effectively.

The trouble is that US regulators have effectively enshrined the platforms’ stance in law. Section 230 of the Communications Decency Act says platform companies are not responsible for the content users post – and that’s not something the platform companies want to see changed. In the absence of any regulatory incentive, companies avoid deciding what constitutes ‘harmful content’, and with no penalties for getting it wrong, they respond commercially by reducing moderation to a politically defensible minimum. And so jihadi war cries and Hollywood blockbuster filters prevail as the low-hanging fruit to prove their efforts.

With Christchurch, the global pressure on the platforms is growing, helped by Spark managing director Simon Moutter, Vodafone NZ CEO Jason Paris and 2degrees CEO Stewart Sherriff. After using their own networks to block access to specific sites last week, the three have since hit out at the platform companies, saying they have a legal duty of care to protect users and wider society by preventing the uploading and sharing of content like the video.

“Although we recognise the speed with which social network companies sought to remove Friday’s video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online,” the three said in an open letter to the global CEOs of Facebook, Twitter and Google.

“We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.

“Technology can be a powerful force for good. The very same platforms that were used to share the video were also used to mobilise outpourings of support. But more needs to be done to prevent horrific content being uploaded. Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified. These must be prioritised as a matter of urgency.”

The three are also arguing for more onerous requirements on companies regarding the most serious types of content, such as content featuring terrorist activity. They suggest requirements like those proposed in Europe – including take-downs within a specified period, proactive measures and fines for failure to comply – could be warranted.

Under New Zealand law, the Censor’s Office has classified the livestream video as ‘objectionable’ and a 22-year-old has been charged under the Films, Videos, and Publications Classification Act for sharing the video, with the Privacy Commissioner now calling on Facebook to notify police of the names of others who shared the clip.

Prime Minister Jacinda Ardern has also said she’ll be talking with the social media companies about what can be done.

The world, standing united against such perversities, awaits their collective willingness to be as much a part of the solution as they are the problem.

Questions or comments...

  1. Andrew Pengelly

    This note from the censor’s office is the best decision ever and I’m glad it came out so quickly.

    “Under New Zealand law, the Censor’s Office has classified the livestream video as ‘objectionable’ and a 22-year-old has been charged under the Films, Videos, and Publications Classification Act for sharing the video, with the Privacy Commissioner now calling on Facebook to notify police of the names of others who shared the clip.”

    Perhaps we could encourage people to think before sharing material by highlighting the fact that they might be charged under censorship laws. Earlier marketing around copyright laws I believe has reduced illegal sharing by simply making people think before they act.


