World-first codes about to start putting big tech on notice over child abuse, terror content
The eSafety Office this week gains world-first powers to go after big tech companies which fail to remove or stop the distribution of child abuse and terror content.
World-first online safety codes will come into force from Saturday, allowing the eSafety Office to investigate, pursue and take enforcement action against big tech giants and related internet companies that fail to prevent the distribution of child abuse and terror content.
Australians will also be able to report to the eSafety Office any tech companies that fail to respond to their requests for that content to be removed in an appropriate manner and time frame.
The implementation of the new codes, the result of two years of consultation with industry bodies, will be “world leading” and show significant “efforts to make the online world safer for Australians,” said eSafety Commissioner Julie Inman Grant.
“I think today more than ever, the Australian community expects the online industry to take all reasonable steps to prevent their services from being used to store, share and distribute horrific content like child sexual abuse and terrorist material,” she said. “Unfortunately, these steps have not always been taken. These mandatory codes give those sectors covered by the codes a clear and agreed blueprint for how they tackle this illegal content.”
Reports of child abuse material had increased significantly, Ms Inman Grant said, adding: “We see a doubling of reports year on year.”
Terrorism-related content was also increasing, and the public had witnessed a surge in graphic material related to global conflicts, Ms Inman Grant said.
“Frankly, between the Ukraine conflict and the Hamas-Israel conflict, people are very aware of the potential for terrorists and violent extremist material being posted online,” she said. “(People) want to know that companies are actively doing something to understand the risks and proactively mitigate those risks.”
The new codes are enforceable on social media services, internet carriage services (also known as internet service providers), equipment providers, app distribution services and hosting services that fail to remove child sexual abuse material and terror content.
A further code covering the same material on search engines will come into effect next year. Ms Inman Grant said that code, originally drafted alongside the five taking effect on Saturday, was sent back to big tech companies and industry associations amid the mass rollout of artificial intelligence. “When the search engine code was delivered to us there was nothing about the creation of synthetic child sex abuse material, terrorist propaganda or generative AI at all,” she said.
Ms Inman Grant said she told tech companies: “I can’t register this because it would already be obsolete”. She said that once the code was returned, it was accepted and would be in place from March 12. Some larger tech companies, including Meta, would fall under multiple codes.
While the five codes would allow the eSafety Office to pursue companies from Saturday, Ms Inman Grant said the office would “ease in” at first with regulatory guidance.