Misogyny, racism thrive online: eSafety commissioner
Social media systems are designed to maximise engagement, risking the amplification of the most harmful content, says Julie Inman Grant.
The eSafety Commissioner has raised the alarm over engagement-maximising algorithms on social media platforms, saying they are pushing Australian children towards extreme content and views that could “spill over” into the real world and see them commit acts of hate and even terrorism.
Facebook, Instagram, TikTok, Twitter and YouTube use algorithms to sort through vast amounts of online data and surface the content most relevant to each user, keeping them engaged for longer.
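As a rough illustration only (not any platform's actual system, which is not public), an engagement-based recommender can be thought of as scoring each candidate post by how likely it is to keep the user interacting, then showing the highest-scoring posts first. The sketch below is a minimal, hypothetical example of that idea; every field name and weight is invented for illustration.

```python
# Minimal, hypothetical sketch of engagement-based feed ranking.
# All signals and weights are invented for illustration; real
# platform systems are far more complex and are not public.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click: float   # model's estimate the user will click
    predicted_dwell: float   # estimated seconds the user will spend
    predicted_share: float   # estimate the user will share or comment

def engagement_score(post: Post) -> float:
    # Weight signals that correlate with time on platform; content
    # that provokes strong reactions tends to score highly whether
    # or not it is healthy for the viewer.
    return (0.4 * post.predicted_click
            + 0.4 * post.predicted_dwell / 60.0
            + 0.2 * post.predicted_share)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Show the highest-scoring posts first.
    return sorted(candidates, key=engagement_score, reverse=True)
```

The point of the toy example is only that the objective rewards whatever holds attention; it makes no distinction between engaging and harmful content, which is the risk the eSafety Commissioner describes.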
But eSafety Commissioner Julie Inman Grant said the same systems designed to maximise engagement also risked amplifying the most harmful content and exposing young people to radicalisation.
“The systems are designed to be sticky and to keep people on the platform and conflict and rage … tend to sell,” Ms Inman Grant said.
“But this may draw people in through shocking and extreme content which could then normalise prejudice and hate by continuing to amplify content and views that are misogynistic, homophobic, racist or extreme.”
Ms Inman Grant said extreme content on mainstream services could also act as a pathway to underground communities such as those made up of “involuntary celibates”, also known as incels, with whom the Christchurch terrorist was associated.
She said she didn’t consider it a “hypothetical” that being exposed to harmful content online could lead to young people perpetrating violence down the track.
“I don’t even think it’s way down the track, we see cases here in Australia where young teenagers are already being radicalised,” she said. “I don’t think it’s hypothetical anymore.
"I think we’re seeing these racist, homophobic, misogynistic views spilling over into real world harms.”
Ms Inman Grant said tech companies needed to open the “black box” concealing how their algorithms work, details they typically withhold on the grounds that they are commercial-in-confidence.
“There is so much opacity in terms of these algorithms … which are the secret sauce of these different sites,” she said.
“What we really need to see is much greater transparency about what particular algorithms have been designed to achieve, the data that fuels it and the outcomes for users.”
On top of concerns about radicalisation, Ms Inman Grant said content relating to self-harm, suicide and body image had the potential to place children in “real physical danger”.
It follows figures from eSafety released earlier this year that found almost two-thirds of young people aged 14 to 17 had been exposed to seriously harmful content relating to violence, suicide and drug taking.
There was also evidence from the US that algorithms fed predatory behaviour, serving perpetrators sexualised videos of children and giving them a forum, through the comments sections on such content, to engage with other predators.
The eSafety Commissioner sent out legal notices to companies including Apple and Meta in August compelling them to reveal what they were doing and not doing to detect and prevent the proliferation of child exploitation material on their platforms.
In a position paper released on Wednesday, the eSafety Commissioner urged major social media platforms to be more transparent about how their algorithms work and to allow people to opt out of such systems.
Ms Inman Grant said that if the companies did not engage with the eSafety Commissioner or respond to the recommendations and questions put to them, “more blunt force” would be used, such as the notices sent out in August compelling them to provide information and strategies on how they were responding to the online harms caused by their algorithms.