‘Honeypot’ profiles of 13-year-old girls were posted online. Predators flocked to one
Social media accounts attributed to 13-year-old girls amassed hundreds of interactions from “suspicious” accounts within two months, including explicit images, sexual messages and requests for personal information.
But one account received five times more interactions from potential predators than another. The difference came down to two factors: whether the profile showed a photo, and whether it declared that a parent managed the account.
Researchers from University College London, led by PhD candidate Somaya Alwejdani, created “honeypot” profiles to test which factors led potential predators to message children.
The researchers set up four fake accounts, using an AI-generated picture of a girl or a generic avatar, and indicating whether a parent managed the account or not.
In two months, the accounts received 268 friend requests, likes, comments and messages from 136 accounts. More than two-thirds of the accounts contacting the girls were assessed as “suspicious”: adult accounts that connected only with children, accounts that contained or sent harmful or illegal content such as pornography or violence, and accounts that sent inappropriate or sexual messages and comments to the girls’ accounts.
Most suspicious accounts presented as men over the age of 20. Ninety per cent of suspicious accounts followed other children, and half contained or sent harmful or illegal content.
One account messaged a 13-year-old girl’s account daily, requesting multiple video calls and sending a phallic photo on day two of messaging. Others told the girls they were pretty. One account’s following list consisted of more than 5000 children and no adults. Another was reported after sending harmful content directly.
Researchers are unable to name the platform due to the study’s ethics approval.
The account with a photo and no parental supervision declaration received the most interactions, with 104 from suspicious accounts. The account with a photo and a parental supervision declaration was next, followed by the account with an avatar and no parental supervision declaration. The ranking was the same for interactions from non-suspicious accounts.
Worryingly, the platform also suggested the girls’ accounts connect with nearly three dozen suspicious accounts. Almost half of all accounts suggested by the site’s algorithm, as opposed to those suggested via mutual friends or other new users, were suspicious.
The social media site has since introduced restrictions for users under 16, defaulting the accounts to private and limiting who they can receive messages from.
The preliminary study, which has yet to be peer-reviewed, was presented at the Applied Research in Crime and Justice Conference in Sydney by co-author Emeritus Professor of security and crime science Richard Wortley.
Wortley said social media networks needed to start identifying “red flags” earlier, such as adult men who mostly followed children.
“One of the problems with the algorithms is they don’t kick in until the damage is done,” he said.
He said a significant concern was the suggested contacts presented to the girls’ accounts, based on mutual interests or friends: “Maybe that’s a strategy that perpetrators take, to increase the chance the algorithm matches them.”
The latest eSafety transparency report released last week found tech giants including Apple, Google, Meta and Microsoft were making minimal efforts to tackle online child sexual abuse.
Apple services and Google’s YouTube were not tracking the number of user reports of child sexual abuse on their services, and could not say how long they took to respond to those reports.
eSafety Commissioner Julie Inman Grant said safety gaps uncovered in 2022 had not been addressed.
“Companies aren’t prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,” she said.
In December, Australia will introduce a world-first social media ban for children under the age of 16. An eSafety survey found 96 per cent of children aged 10 to 15 used at least one social media platform, and about 70 per cent had encountered harmful content.
Australian Catholic University Institute of Child Protection Studies director Daryl Higgins said the findings highlighted the need for social media literacy education among young people, as well as greater regulation of the networks themselves.
“The most dire need that we have in Australia is not a ban, but for media literacy, education and prevention strategies,” he said.
Research co-authored by Higgins found online abuse is prevalent among adolescents: in a survey of 16- to 24-year-olds, nearly 18 per cent had received sexual solicitation, mostly from adults not known to them.
Higgins also said tech companies held the burden of responsibility: “If they are creating algorithms that drive potentially vulnerable people to harmful content, then they should be held to account.”
If you or anyone you know needs support, call Lifeline on 13 11 14, Beyond Blue on 1800 512 348, Kids Helpline on 1800 55 1800.