We’ve acted on child abuse: big tech had to be dragged into line
For years we’ve seen big tech businesses throwing their weight around in Australia, challenging our approach to important issues such as harmful online content, child protection and even payment for news.
The stark headlines are nothing new, nor are they unique to us, and we should not be swayed by them.
Sovereign governments the world over are becoming familiar with the pattern of tech industry histrionics, brinkmanship and threats to withdraw services.
But as Australia’s online safety regulator, it is my role to make decisions in the public interest, not the industry’s, particularly when it comes to protecting our most vulnerable citizens.
This week, eSafety’s industry standards, designed to reduce the flow of child sexual abuse material online, came a step closer to taking effect when they were registered with the Australian parliament.
The standards cover what are known collectively as Designated Internet Services, such as apps, websites, cloud-based file and photo storage services, and some services that deploy or distribute generative AI models; along with Relevant Electronic Services, including dating sites, online games and messaging services.
The importance of this section of the industry can’t be overstated, as we know cloud-based storage and messaging platforms are routinely used by those with a sexual interest in children to store and distribute this abhorrent content.
While the standards focus on the worst of the worst online content, including the abuse of children and pro-terror material, resistance from industry during this year’s public consultation was more robust than we expected. Some prominent quarters claimed the standards represented a step too far, one that could unleash a dystopian future of widespread government surveillance.
Hyperbole aside, if you really want to know what a dystopia looks like, imagine a world where adults fail to protect children from vile forms of torture and sexual abuse, then allow their trauma to be freely shared with predators on a global scale. That is the world we live in today.
eSafety’s sister hotline in the US, the National Center for Missing and Exploited Children, analysed almost 36 million reports of child sexual abuse material last year, comprising about 55 million images and 50 million videos of child sexual exploitation. These are staggering volumes, yet they are only the tip of the iceberg.
The Childlight Project recently estimated there are more than 300 million child victims of online sexual abuse globally, and since Covid, eSafety has seen a year-on-year doubling of reports of child sexual abuse to our investigations branch.
While we hope these standards will have a global impact in breaking the cycle of online sexual violence against children and the re-traumatisation experienced by victims of child sexual abuse, they will not – as some have claimed – require companies to indiscriminately search protected communications or personal information.
Nor will they require tech companies to break end-to-end encryption or introduce systemic vulnerabilities into its operation – we recognise that undermining security would not be in anyone’s interest.
But giving industry a free pass to do nothing is not an option. The standards are designed to encourage innovation and investment in technologies that improve safety.
We are challenging big tech, and the industry more broadly, to harness their collective brilliance, vast financial resources and sophisticated tools to address the profoundly damaging content hosted on and distributed by their services.
It’s also worth remembering that the codes developed by industry itself failed: they did not include a wholesale commitment to deploy any of the commonly used privacy-preserving methods that detect known child sexual abuse material and known pro-terror material. Nor was there a solid commitment to invest in detection technologies or more robust reporting. Some even resisted enforcing their own existing policies.
Governments around the world are focused on creating a similar duty of care for tech companies to tackle illegal content. These standards represent an important inflection point for tech responsibility and accountability that we can build upon.
Another world-first feature of the standards is an enhanced obligation to ensure riskier open-source AI models have guardrails throughout the ecosystem, preventing them from being used by bad actors to generate harmful content such as deepfakes, including deepfake child abuse material.
It is too easy to obtain and use AI software to create harmful content, as the Bacchus Marsh Grammar case in Victoria this month demonstrated.
We have received reports of AI-generated child sexual abuse material designed to harass and humiliate young Australians. Demanding safeguards for platform model libraries and for high-risk, consumer-facing open-source generative AI programs, including “nudify apps” that lack controls to prevent their use on children, is both necessary and timely.
The good news is that six of the eight codes drafted by other industry segments are already in force, including one covering search engines that initially failed to account for the creation of synthetic child sexual abuse material enabled by the integration of generative AI.
We have demonstrated that we listen to multiple stakeholders, consider feedback carefully and seek balanced, appropriate outcomes. I believe the standards submitted to parliament this week achieve that balance.
It is not unreasonable to expect tech companies to do everything within their power to prevent crimes against children from playing out on their platforms, and we must hold the line.
Ultimately, under these standards it will be up to companies themselves to decide what technology they deploy and how they deploy it to tackle this issue – this is not prescribed by eSafety.
The companies will control the deployment and are free to tighten their policies to make clear that these requirements are strictly confined to known child sexual abuse and pro-terror content. The “slippery slope” argument simply doesn’t hold up here.
We can all agree there is an urgent need for the industry as a whole to start taking meaningful action in the interests of all children. The time to act is now.
Julie Inman Grant is the eSafety Commissioner.