Social media spiralling into dystopia
Despite the best efforts of Facebook, Google and Twitter, video of the Christchurch massacre slipped through the net.
As Brenton Tarrant drove away from the New Zealand mosque where he allegedly carried out the first of two shooting sprees, only 10 people were tuned into his live broadcast of the rampage on Facebook Live, according to archived versions of his page.
But the video, which shows dozens of people inside the Al Noor mosque in Christchurch being gunned down, has probably been viewed millions of times in various formats across the internet. The footage was recorded, repackaged and reposted on mainstream sites, fringe destinations with looser restrictions and in the web’s darkest corners accessible only with special software.
Facebook and YouTube have worked to keep the video off their own sites.
The vast cloning of the footage underlines a stark reality in the era of live online broadcasting: once such a video airs, its spread can’t be cut off.
Artificial intelligence software isn’t powerful enough to fully detect violent content as it is being broadcast, according to researchers. And widely available software enables people to instantly record and create a copy of an online video. That means the footage now lives on people’s phones and computers, showing how little control the major tech platforms have over the fate of a video once it airs.
Facebook acted, too late
The video was removed only minutes after it stopped airing, according to social media intelligence company Storyful, which viewed archived versions of Tarrant’s Facebook page.
Facebook said it was alerted by New Zealand police shortly after the livestream began.
But by then, it was too late. Anyone with a link to the video could have recorded it.
Facebook said that in the first 24 hours after the attack, it blocked or removed 1.5 million copies of the video from its site. About 80 per cent of those copies were caught while they were being uploaded to Facebook; the remaining 300,000 or so made it onto the site before being taken down.
YouTube says it has taken down tens of thousands of postings of the video, and Twitter says it suspended Tarrant’s account and is working to remove the video.
Facebook, Twitter and YouTube have stepped up their investments in artificial intelligence tools and human moderators to detect and remove content that violates their guidelines.
These internet giants have made progress in stamping out terrorist propaganda from Islamic State militants, for example. They maintain a shared database of known terrorist content in which each item is assigned a digital fingerprint, called a “hash”; uploads that are visually similar to fingerprinted material can then be detected and blocked automatically.
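In rough outline, that matching works as follows. This is a minimal Python sketch of the general technique, not the platforms’ actual systems (those, like Microsoft’s PhotoDNA, are proprietary and far more robust); the “average hash” scheme, the stored hash value and the threshold are illustrative assumptions.

```python
# Minimal sketch of hash-based matching using a simple "average hash".
# The value in BLOCKED_HASHES and the threshold are illustrative only.
from PIL import Image  # pip install Pillow

def average_hash(frame_path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint of its light/dark pattern."""
    img = Image.open(frame_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance means visually similar frames."""
    return bin(a ^ b).count("1")

# Hypothetical shared database of fingerprints from known terrorist content.
BLOCKED_HASHES = {0x8F3C_0A17_B2E4_91D0}

def should_block(frame_path: str, threshold: int = 10) -> bool:
    """Flag an upload whose fingerprint sits close to any blocked one."""
    h = average_hash(frame_path)
    return any(hamming(h, bad) <= threshold for bad in BLOCKED_HASHES)
```

The key property is that near-identical frames produce fingerprints only a few bits apart, so copies can be caught automatically without comparing the footage itself frame by frame.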
Tricks of the trade
But these tactics can be circumvented if the footage is doctored. Versions of Tarrant’s video, for example, were edited to imitate a first-person shooter game and appeared on Discord, a messaging app for videogamers.
The software can also struggle to catch copies that differ from the original, such as a mobile phone recording of the video playing in a web browser.
To try to tackle this, Facebook says it is employing audio technology and is also hashing the altered copies it finds. Live online broadcasting, which is exploding in popularity, compounds the problem because it is challenging to monitor in real time. The algorithms aren’t yet equipped to distinguish real violence from fiction, or to detect certain moving images, such as guns held in different positions.
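The same toy fingerprint shows why doctored or re-filmed copies slip through: every crop, overlay or camera re-recording flips some bits of the fingerprint, and once enough bits differ the upload falls outside the matcher’s tolerance. The threshold and values below are assumptions for illustration, continuing the sketch above.

```python
# Why edited or re-filmed copies evade matching: flipping random bits of a
# fingerprint stands in for crops, game-style overlays or a phone camera
# pointed at a screen. Values are illustrative, not from any real system.
import random

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def perturb(fingerprint: int, flips: int) -> int:
    """Flip `flips` distinct bits of a 64-bit fingerprint."""
    for pos in random.sample(range(64), flips):
        fingerprint ^= 1 << pos
    return fingerprint

THRESHOLD = 10                        # assumed matcher tolerance
original = 0x8F3C_0A17_B2E4_91D0      # a fingerprint already in the database

light_edit = perturb(original, 4)     # minor tweak: distance 4, still caught
heavy_edit = perturb(original, 20)    # heavy doctoring: distance 20, missed

print(hamming(original, light_edit) <= THRESHOLD)  # True  -> blocked
print(hamming(original, heavy_edit) <= THRESHOLD)  # False -> slips through
```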
Active on the fringe
Over the weekend, links to different versions and clips of the rampage were readily available on multiple fringe sites such as Gab, BestGore.com and DTube, an alternative to YouTube that has little to no moderation. Links on the storage site Dropbox were being circulated and were still active on Sunday.
The video was given a supercharged boost because of the way Tarrant promoted it. Before the shooting spree, Tarrant apparently posted on 8chan, an anonymous messaging forum favoured by extremist groups, announcing his intention to attack the mosque and providing links to the livestream and an accompanying manifesto filled with white supremacist conspiracy theories.
Part of the calculation, say internet researchers, was to take advantage of 8chan’s culture of archiving sensitive videos. By giving the 8chan community a heads-up about the attack and then posting a link to the livestream, Tarrant ensured the video couldn’t be permanently deleted.
Says Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Shorenstein Centre: “You have these groups of people who consider themselves quasi movements online and they believe they own the internet and as a result these calls to action are almost rote memory.
“They’re just part of the culture, so if someone says that they’re going to commit some kind of atrocity then you will see this downloading and reuploading practice happen.”
That could explain why, YouTube says, the site has seen an unprecedented volume of attempts to post original and modified versions of the shooting video. A YouTube spokeswoman says that if it appears a criminal act will take place the website will end a livestream and may terminate the channel showing it.
YouTube now has 10,000 workers devoted to addressing content that violates its rules.
On Sunday, New Zealand’s government emphasised that distributing or possessing the video is a crime because it has been classified as objectionable under the law.
NZ teen risks prison
Yesterday, a Christchurch teenager was charged with distributing the mosque attack livestream.
The 18-year-old, who cannot yet be named, has been denied bail, but police say he was not involved in the attack. He faces two charges: one of sharing the livestream, and another of posting a photograph of one of the attacked mosques with the message “target acquired”, along with other chat messages “inciting extreme violence”. If found guilty, he could face up to 14 years in jail.
Facebook appears to have become quicker at removing live broadcasts after facing criticism it took too long after other violent incidents. In 2017, a man in Thailand broadcast the murder of his infant daughter and the video remained on the site for about 24 hours.
This time, Facebook says its content policy team immediately designated the New Zealand shooting as a terrorist attack, meaning any praise or support of the event violates the company’s rules and is removed.
But it is still having to improvise. Facebook says it initially allowed clips and images showing nonviolent scenes of Tarrant’s video to stay up, but it has since reversed course and is now removing all of his footage.
Highlighting the game of Whac-A-Mole, a two-minute clip of the video showing people getting shot was still viewable on Facebook as of Sunday afternoon.
The Wall Street Journal