
How social media giants created a ‘paedophile paradise’

By Jordan Baker

A world-first insight into how social media giants are responding to online child sexual abuse has exposed what Australia’s eSafety boss condemned as a culture of wilful blindness in which companies ignore or make token attempts to monitor serious criminal activity on their sites.

As the volume of online child exploitation grows exponentially – reports ballooned from 3000 images in 1998 to almost 85 million last year – commissioner Julie Inman Grant said companies had not only turned a blind eye to the problem, but had “effectively created a paedophile’s paradise”.

Federal eSafety Commissioner Julie Inman Grant. Credit: Rhett Wyman

Australia’s eSafety Commissioner used pioneering laws this year to compel the companies to reveal information they have kept secret, and which governments internationally have chased, for up to a decade.

Some, particularly Apple, have been reluctant to monitor content for child abuse amid concerns that it invades users’ privacy.

Now, a report on the companies’ responses, published on Thursday, shows that Skype, Microsoft’s video conferencing tool and the most common platform for live-streamed child abuse, takes two days to respond to a report of abuse and does not proactively use any of the available technology to detect new child abuse material.

Microsoft developed a technology called PhotoDNA, which can detect known exploitation content, but does not use it to check material stored in its OneDrive service, allowing offenders to escape detection unless they try to share the material.
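
(To illustrate in general terms what that kind of scanning involves, the sketch below matches stored files against a list of hashes of known images. It is a conceptual illustration only, not PhotoDNA’s actual method, which relies on a proprietary perceptual hash designed to survive resizing and re-encoding; the hash value, folder path and function names here are placeholders.)

# Conceptual sketch only: PhotoDNA uses a proprietary perceptual hash that
# survives resizing and re-encoding; SHA-256 is a simple stand-in here to
# show the workflow of matching stored files against known-image hashes.
import hashlib
from pathlib import Path

# Placeholder hash list; real deployments use databases of known material
# supplied by clearinghouses such as NCMEC.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_stored_files(folder: Path) -> list:
    """Return stored files whose hash appears on the known-image list."""
    return [p for p in folder.rglob("*")
            if p.is_file() and file_hash(p) in KNOWN_HASHES]

# Hypothetical usage: matches = scan_stored_files(Path("/srv/storage"))

A production system would also need secure handling of the hash list and human review of matches before any report is made.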

Apple does not check for abuse material either in its cloud or when it is shared via iMessage. It made the fewest reports of child exploitation of any tech giant last year, with just 160 instances reported to a US database, despite many of its 2 billion users having access to FaceTime.

The tech giant last week abandoned plans to use a new tool to check iPhones and iCloud photos for child abuse material because of a privacy backlash.

WhatsApp bans 300,000 accounts a month for child exploitation violations, but does not share details about the users it bans with stablemates Facebook or Instagram, even though abusers use multiple accounts.


Neither Skype, Microsoft Teams nor FaceTime takes any action to detect child exploitation material in live video streams. Inman Grant said the gold standard in using technology to detect images was Microsoft’s Xbox, but questioned why the company was not as active on its other platforms.

‘The hubris we got was really gobsmacking ... [social media companies] were admitting to enabling crime of the worst possible kind.’

Julie Inman Grant, Australia’s eSafety Commissioner

Inman Grant said none of the companies justified their failure to act. “It’s shocking to me that none of the video conferencing platforms are using any kind of detection technologies when we know livestreamed child sexual abuse is a growing crime,” she said.

“What this actually shows us – particularly with Apple’s latest announcement – is that not only are they turning a blind eye to crime scenes, they’ve effectively created a paedophile’s paradise where they can store photos without detection.

“The hubris we got in some of the responses was really gobsmacking. [This activity] is criminal in almost every country in the world. They were admitting to enabling crime scenes of the worst possible kind on their platforms, knowingly.”


Professor Hany Farid, one of the inventors of PhotoDNA, which identifies and removes child exploitation images, described the responses as predictable and disappointing. “The technology sector has not responded to this crisis with the urgency or resources that I think it should,” he said.

“I continue to be baffled as to why they are not more aggressively responding to these horrific crimes against children. So-called privacy-focused groups talk about the importance of privacy, seemingly unaware that when we are talking about tracking down [child sexual abuse material], we are talking about privacy — the privacy of the young victims.”

Information and Privacy Commissioner Angelene Falk said children required support online as well as offline. “As part of the Privacy Act review, we have recommended that all organisations handle personal information fairly and reasonably, which would ensure they consider the impacts of their information handling activities on children,” she said.

A Microsoft spokesperson said bad actors were becoming more sophisticated and “we continue to challenge ourselves to adapt”, adding that companies, communities and governments must continue to collaborate.

National Sexual Assault, Domestic Family Violence Counselling Service: 1800 737 732.
Kids Helpline: 1800 55 1800. To report online exploitation material and abuse, visit https://www.esafety.gov.au/


Original URL: https://www.smh.com.au/national/how-social-media-giants-created-a-paedophile-paradise-20221214-p5c666.html