The People v tech terror as eSafety watchdog takes on giants for failing to stop extremism and child sex abuse content
Tech giants face huge fines for failing to act on terrorism, violent extremism and child abuse material amid warnings AI is being weaponised for hate.
Tech giants Meta, Google, X, Telegram, WhatsApp and Reddit face tens of millions of dollars in penalties if they fail to act on terrorism, violent extremism and child abuse material, amid warnings that generative artificial intelligence is being weaponised to spread hate and disinformation.
Australia’s eSafety Commissioner Julie Inman Grant has issued legal notices to six social media and encrypted messaging platforms in response to a surge in reports about harmful content spread by terrorists and violent extremists.
The rapid rise of AI and the amplifying power of targeting algorithms have also ramped up pressure on tech companies to put safeguards in place to stop terrorists, extremists and sexual predators spreading disinformation and illegal content.
Telegram is the top-ranked mainstream platform linked with terrorist and violent extremist material, ahead of Google’s YouTube, X, Meta-owned Facebook, Instagram and WhatsApp.
Telegram is being used by networks of terrorists and violent extremists, including ISIS sympathisers and the neo-Nazi network known as Terrorgram. Experts say the platform is being used to spread propaganda, recruit members and agitate for real-world acts of mass violence. Users of an Islamic State forum have also been identified comparing the capabilities of AI tools such as Google’s Gemini, ChatGPT and Microsoft’s Copilot.
The regulatory crackdown comes amid heightened tensions between the Albanese government and Mark Zuckerberg’s Meta, the parent company of Facebook, after the US tech giant walked away from payment-for-content deals with Australian media companies.
Under the News Media Bargaining Code devised by former treasurer Josh Frydenberg, Meta and Google agreed to deals with media companies in 2021 worth close to a combined $250m a year for Australian news publishers; those deals were due to expire in the second half of 2024.
Ms Inman Grant – who has taken Elon Musk’s X to the Federal Court after the social media platform ignored requests to outline measures to detect and remove child sexual exploitation and abuse material – warned that generative AI and multimodal capabilities would make a toxic situation worse.
“Human beings find infinitely creative ways to misuse technologies, even ways that we can’t imagine,” Ms Inman Grant said. “What could be worse than the images and videos that our investigators are seeing of beheadings … people in their place of worship as they’re being shot?”
The eSafety Commissioner said “shock-value propaganda” deepfakes, ranging from bombings to beheadings, could be produced as “hyper-realistic” content.
“You can imagine a number of really concerning scenarios because we know also (with) algorithms and the way that they amplify and content goes viral … by the time some of that content could go viral it can be taken as truth,” she said.
Using transparency powers under the Online Safety Act, Ms Inman Grant has given the tech companies 49 days to explain how they are responding to the risks of terrorism and online radicalisation. The notices follow a similar series of legal warnings issued last year in relation to child sexual exploitation and abuse material.
Ms Inman Grant said that under X’s new leadership, the company had cut 80 per cent of its safety engineers, 50 per cent of its content moderators and 80 per cent of the public policy staff who engage with governments.
X Corp was issued with an initial $610,500 infringement notice last September. Civil penalty proceedings against the company began in the Federal Court in December.
Ms Inman Grant said the company now faced steeper penalties.
“The important thing to remember here is it was a fairly minor infringement notice that they could have paid for,” she said. “They could be fined, depending on what the court finds, up to $782,000 a day from the time they were found to be out of compliance, which was March last year. This could be in the tens of millions of dollars.”
The 2019 Christchurch mosque shootings, which were live-streamed on Facebook for 17 minutes, the Halle synagogue shooting in Germany and the Buffalo mass shooting in the US underscored how online services can be exploited by violent extremists, Ms Inman Grant said.
Buffalo shooter Payton Gendron’s manifesto cited Reddit as the key platform that fuelled his radicalisation towards violent white supremacist extremism.
An EU Internet Forum study last year found that X recorded the highest rates of findability and algorithmic amplification of terrorist and violent extremist material.
Amid rising threats of radicalisation and dangers to public safety, Ms Inman Grant said eSafety continued to “receive reports about perpetrator-produced material from terror attacks … reshared on mainstream platforms”.
“We remain concerned about how extremists weaponise technology like live-streaming, algorithms and recommender systems and other features to promote or share this hugely harmful material.
“We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm.
“The potential to combine the shock value of terrorist propaganda and really grievous images and videos that you can’t unsee, (when) you combine that with misinformation and disinformation … you really have a toxic mix of really problematic content if the right guard rails aren’t in place.”
Ms Inman Grant said that despite tech companies signing up to the Global Internet Forum to Counter Terrorism and the Christchurch Call, they had not provided answers to basic questions and remained evasive about what actions they were taking.
The regulator wants tech companies to explain the systems, processes, resources and technologies they have in place to deal with “organically created propaganda material”, as well as the use of end-to-end encrypted services “to share and organise terrorist acts”.
Ms Inman Grant said Chinese-owned TikTok was included in legal notices sent last year to X, Google, Twitch and Discord over action on online child sexual abuse. Under the new terror and extremism legal notices, Telegram and Reddit were also sent questions about child sexual abuse content.
She flagged that further notices would likely be issued.