Violent content escapes TikTok ‘uglification’ ban
Social media platform TikTok was once accused of suppressing videos of people who were too fat, thin or poor, who lived in slums, who had “ugly facial looks” or “crooked mouth disease”, or who were “senior people with too many wrinkles”.
Purported internal documents instructed moderators to impose a lifetime ban on users inciting the independence of Northern Ireland, the Republic of Chechnya, Tibet or Taiwan, or the “uglification or distortion of local or other countries’ history such as May 1998 riots of Indonesia, Cambodian genocide, Tiananmen Square incidents”.
When documents with several pages of these guidelines surfaced in March in The Intercept, TikTok said “most of” the guidelines were “either no longer in use, or in some cases appear never to have been in place”.
Nevertheless, you would expect the platform to be well equipped to quickly remove any offensive video, such as the graphic footage of 33-year-old US veteran Ronnie McNutt taking his own life. Posting scenes of suicide falls under “bloody scenes and violence” in the alleged guidelines and should prompt a video’s removal.
TikTok said last year that it had stepped up its video review efforts to detect violence, yet McNutt’s video was shared thousands of times on the platform last Sunday.
He had streamed his death on August 31 on Facebook; it was subsequently reposted on TikTok, Instagram and Twitter.
Experts say the issue is not TikTok’s willingness to remove such a video; it is the platform’s ability to screen these videos out before they are shared thousands of times within minutes.
One solution is a prominent report button next to the video feed in the TikTok app that lets users trigger an instant human review. If TikTok gets hundreds of reports about a video in a few seconds, it knows something is awry.
You can report videos on TikTok, but the report option is buried among many choices behind the “share” button. That’s counterintuitive.
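TikTok’s internal triggers are not public, but the report-spike idea described above is simple to illustrate. The minimal sketch below, with invented names and thresholds that are assumptions rather than anything TikTok has disclosed, counts reports per video over a short sliding window and escalates to human review once a burst threshold is crossed.

```python
import time
from collections import defaultdict, deque

# Hypothetical sketch of the report-spike idea: escalate a video for
# human review when reports arrive in a burst. The window length and
# threshold are illustrative guesses, not TikTok's actual values.
WINDOW_SECONDS = 10
REPORT_THRESHOLD = 100

_reports = defaultdict(deque)  # video_id -> timestamps of recent reports

def record_report(video_id, now=None):
    """Log one user report; return True if the video should be escalated."""
    now = time.time() if now is None else now
    window = _reports[video_id]
    window.append(now)
    # Discard reports that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= REPORT_THRESHOLD

# A burst of 150 reports on one clip within a few seconds trips the flag.
if __name__ == "__main__":
    flagged = False
    for i in range(150):
        flagged = record_report("video-123", now=1000.0 + i * 0.05)
    print("escalate to human review:", flagged)  # True
```

A sliding window like this is cheap enough to run per video at scale, which is why burst detection on user reports is often suggested as a faster backstop than content analysis alone.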
Cameron Edmond, of the Expanded Perception and Interaction Centre at the University of NSW, said moderation teams typically searched for hashtags and keywords and filtered content on that basis.
Some video detection may not be sophisticated enough to flag violent footage embedded in a child’s video, as occurred in the TikTok case. “You’ve got to remember that these algorithms still are generally trained on very heavily curated data sets. So it’s actually very easy to fool them.”
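Edmond’s description suggests a first-pass filter that matches a post’s hashtags and caption against a blocklist. The toy sketch below, whose blocklist terms and post fields are invented for illustration, shows both how such filtering works and why a trivial respelling defeats it.

```python
# Toy illustration of hashtag/keyword moderation as Edmond describes it.
# The blocklist terms and post fields are invented for this sketch.
BLOCKLIST = {"#graphicviolence", "suicide", "gore"}

def flag_for_review(caption, hashtags):
    """Return True if any blocklisted term appears in the caption or tags."""
    text = caption.lower()
    tags = {tag.lower() for tag in hashtags}
    return bool(tags & BLOCKLIST) or any(term in text for term in BLOCKLIST)

print(flag_for_review("so sad", ["#graphicviolence"]))   # True: exact match
# A trivial respelling slips straight past the keyword filter,
# echoing Edmond's point that such systems are easy to fool.
print(flag_for_review("so sad", ["#graphicvi0lence"]))   # False
```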
He said it may be impossible to screen out some unacceptable content as soon as it is posted; an alternative is to vet it before it goes live. “Until we’ve got computers that are that fast, until we’ve got systems that can run that fast, we can’t necessarily achieve that.”
Lee Hunter, TikTok’s general manager for Australia and New Zealand, said the violent content posted on Sunday had been distressing and a clear violation of TikTok’s community guidelines.
* To discuss concerns about mental health, please contact Lifeline on 13 11 14.