Coronavirus lockdown sparks rise in people accessing child abuse content
Isolation and working from home during the coronavirus crisis have contributed to a shocking increase in this horrifying compulsion.
COMMENT
Here’s the big myth: that you need to go onto the dark web to view child sexual abuse material (CSAM). Actually, no.
Just log onto Twitter and you can see plenty of child abuse material. Users regularly post images and videos. Sometimes they get taken down quickly. Often they don’t.
Just take 19-year-old Twitter user and CSAM survivor, Avri Sapir. Whenever she finds an image of herself being abused as a child, she posts about it. And she posts all the time.
On April 26 this year she tweeted:
I’m tired. I shouldn’t find photos of myself as a child being raped when I’m just scrolling through my Twitter feed. I shouldn’t have to go looking for images of my own abuse because platforms refuse to do the bare minimum to police their own content. This isn’t my job.
If you want an idea of the scale of the problem, Britain’s Telegraph newspaper reports that data from the Internet Watch Foundation shows Twitter is responsible for almost half of the child abuse material that UK investigators found openly hosted on popular international tech sites.
The newspaper reports: "Statistics from the Internet Watch Foundation (IWF) show that 49 percent of the images, videos and url links it found on social media, search engines and cloud services in the last three years were on the social network, making up 1,396 of the total 2,835 incidents found.”
For a survivor like Avri, the risks are high. CSAM victims suffer lifelong consequences including (but not limited to): depression, anxiety, feelings of shame, problems with self-esteem and trouble maintaining intimate relationships.
CSAM differs from other forms of childhood sexual abuse in a key manner. When images are taken and distributed online, the trauma never ends. The victim always wonders whether more images of their abuse are circulating somewhere. A recent survey of 150 adult CSAM survivors found that one third are contacted, harassed or threatened by people who have seen their images.
Before COVID-19 started its rapid spread around the globe, Minister for Home Affairs Peter Dutton gave a speech on March 5 stating: “ … every five minutes a web page shows a child being sexually abused. Australia, I’m sorry to say, contributes to the epidemic of child sexual abuse.”
Other experts also noted the rapid proliferation of CSAM. Last year WePROTECT – a global alliance fighting the online sexual exploitation of children – issued a startling threat assessment predicting a “tsunami” of online child sexual abuse material.
Then came COVID-19. As experts in this field, we’re watching closely and expecting an exponential jump in the number of people creating and accessing CSAM this year. Why? Because the COVID-19 pandemic has seen all of us isolating at home, and therefore using our home computers more. Our bosses aren’t breathing down our necks – so dark impulses, or even just curiosity, can play out in terrible ways.
And indeed, there’s emerging evidence to confirm this fear. Previously unreleased data from the Australian eSafety Commission shows that child sexual abuse material reports to its office increased by 82 per cent in May this year and 97 per cent in June 2020 compared to the same months last year.
In the 2019-20 financial year, eSafety’s Cyber Report team finalised investigations into 13,359 individual items of content depicting child sexual abuse. That compares with approximately 8,000 items in 2018-19, an increase of roughly 60 per cent in CSAM investigations over the past year. This is mirrored around the world.
On the surface of it, it’s hard to understand why there’s not more outrage. Where’s all the investigative reporting on this issue? But maybe we’re becoming immune to the horror of it.
A platform like Pornhub, serving up largely free content, is a hotbed of child sexual exploitation. Many commentators have suggested that this has served to desensitise hordes of viewers to all sorts of violent and non-consensual sexual behaviours, including sexual violence against children.
Meanwhile, Twitter says it has “zero tolerance towards any material that features or promotes child sexual exploitation”.
Experts disagree that this is what’s actually happening in practice. Essentially: that’s just corporate words. In a fascinating research paper from 2017, the Pew Center in the US canvassed more than 1,500 tech experts from around the world. Over and over again, these experts explained that technology companies have little incentive to rein in uncivil behaviour on their platforms because it doesn’t serve their profit motives.
In the report, Frank Pasquale, professor of law at the University of Maryland stated: “The major internet platforms are driven by a profit motive …. Whatever behaviour increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”
Although if you think about Avri’s story, or even how long it took to take down the video of the Christchurch massacre from Facebook Live (29 minutes), we’re not so sure about even the egregious cases.
In short, these monolithic tech companies make billions of dollars from our data and pay almost no corporate tax. They have the best engineers and programmers in the world working for them. If they wanted to fix CSAM on their platforms, they would.
A separate paper just published by University of Auckland academics shows Australian tech executives believe that filtering child sexual abuse material is an impediment to financial growth and should be left to police and market forces. Despite creating a platform where child abuse thrives, these publishers want someone else to fix it after it occurs.
In developing their own CSAM filter, Australian tech companies used the Interpol blacklist – rather than the AFP blacklist – because it was limited to depictions of children under 13 being sexually assaulted. (As you’d hope, the Australian legal definition of CSAM is much broader than this. The platforms are choosing not to apply it because once again, it doesn’t suit them.)
Noting the grave harm done to children like Avri, we the undersigned urgently call upon Twitter to protect children from harm on their platform. Words are not enough. We want action.
And if Twitter and other social media platforms can’t or won’t protect our kids, we call upon the Government to legislate a duty of care to keep children safe.
This article is signed by Ginger Gorman, cyberhate expert and author of the best-selling book, Troll Hunting, Natalie Walker, CEO of PartnerSPEAK, Michael Salter, Scientia Associate Professor of Criminology, UNSW, and Josh Bornstein, Principal Lawyer and Director, Maurice Blackburn.