‘No fear’: eSafety Commissioner Julie Inman Grant takes X, Meta and Google to task
Australia’s online safety watchdog has “no fear or favour” for any tech bosses as she launches a scathing new report on the world’s top digital platforms.
The eSafety Commissioner has vowed to do her job “with no fear or favour” as she releases a scathing new report that shows tech giants are failing to protect their users against online terrorism.
Last year eSafety Commissioner Julie Inman Grant sent transparency notices to Google, Meta, WhatsApp, X, Telegram and Reddit demanding to know how the platforms were tackling terrorism content being shared on their websites.
X is owned by Elon Musk who is closely involved in the new Donald Trump administration.
“I’m not trying to pick a fight with any individual,” she said when asked whether Musk’s closeness to Trump had any implications for eSafety’s pursuit of the platform.
“When I see platforms are behaving in ways that are causing potential harm, I’m doing the job.
“It’s really about the company and X Corp … (Musk) is a visible leader in the space but I do my job with no fear or favour.”
According to the new report, social media and digital messaging platforms are failing to protect their users against terrorist and violent extremist (TVE) content with some companies taking up to 2.5 days to remove the dangerous material.
The report follows a West Australian teenager allegedly threatening online to carry out a “Christchurch 2.0” attack on a southwest Sydney mosque.
Meta-owned Threads took two-and-a-half days to respond to user reports of terrorist and violent extremist content, WhatsApp took more than a day and Reddit took 1.3 days.
Despite sharing the same parent company, a person or organisation deemed dangerous and banned on sister sites Instagram, Facebook and Threads could still open a WhatsApp account.
Ms Inman Grant said the platforms were acting too slowly on an escalating issue.
“This content can go viral in a matter of minutes or an hour. Not only is the slow reaction time … two-and-a-half days in this space is a lifetime,” she said.
The report also raised concerns over the industry’s failure to detect or prevent terrorists from live-streaming violent attacks.
Meta has no protections in place to detect live-streamed violence on Messenger Rooms.
Concerns were also raised over the growing use of artificial intelligence to create hyper-realistic violent content.
Google received 258 user reports about terrorist and violent extremist content users believed was made by AI, and 86 reports of AI-generated child abuse material, the report said.
It also found that at many of the platforms, human moderators did not speak most, or any, of the five languages other than English most commonly spoken in Australian homes.
Australians most commonly speak Arabic, Cantonese, Mandarin, Vietnamese and Punjabi at home in addition to English.
Reddit and WhatsApp moderators were fluent in only one of the five top languages spoken in Australian homes, while Telegram covered two out of five.
Ms Inman Grant said Mr Trump’s Truth Social was not included in the report because it was not raised as one of the problematic platforms by stakeholders.