Self-regulation of fake news is not working, research says
The voice referendum has been the target of fake news on social media platforms, which are not enforcing their own standards with regard to misinformation, new research suggests.
The voice referendum has been targeted by false claims of being “illegal” and “fraudulent” on social media platforms, with new research showing none of the major technology companies surveyed are enforcing their own guidelines around “fake news”.
A rapid, small-sample investigation recently conducted by the not-for-profit big-tech watchdog Reset.Tech Australia found that the vast majority of social media posts the organisation reported to the platforms were not removed or flagged as problematic, despite containing what Reset.Tech says was “clear electoral process misinformation”.
“This content largely centred around claims that Australian elections had been rigged, that ballots had or would be stolen, or that the voice referendum vote was invalid or illegal,” a Reset.Tech report says.
“None of the platforms are effectively enforcing their community guidelines, nor are they implementing meaningful responses based on their requirements under the Australian Code of Practice on Disinformation and Misinformation.
“The majority of the misinformation content reported to the platforms is still available online and is unlabelled at the time of publication. Further, this content continues to grow in views.”
Microblogging site X, formerly Twitter, was the worst offender, Reset.Tech says, with none of the 50 posts reported being removed and the majority also not labelled – X’s preferred response – in the two weeks after they were reported.
“As far as this rapid experiment could detect, X was the worst performer in this experiment in terms of meeting its commitments to users as outlined in its community guidelines,” Reset.Tech says.
The 24 posts monitored on Meta’s Facebook centred on claims that Australian elections were rigged or were going to be rigged and that ballots had been stolen, “or contained calls to boycott voting because it was treasonous”.
Just one of the posts reported by Reset.Tech was labelled in the two weeks after reporting and the majority of posts remained available and unlabelled.
On TikTok, one of the 25 posts monitored was removed before it was reported and eight were removed afterwards, making it the best performer among the three platforms.
However, Reset.Tech said the majority of posts were still available and did not live up to the site’s stated community guidelines.
Reset.Tech said it conducted the research quickly in order to make a submission to the consultation on the draft Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill; that consultation period has now closed. Under the draft legislation, the government is considering expanding the Australian Communications and Media Authority’s (ACMA) powers to regulate social media, following a government review of major technology companies’ compliance with the current voluntary code of conduct on online misinformation.
ACMA has recommended it be given record-keeping and information-gathering powers, and also wants to reserve the right to introduce industry standards in the future, which would carry stronger enforcement mechanisms and higher penalties for non-compliance.
Reset.Tech said while the need to conduct its research quickly resulted in a small sample size, “it suggests there may be a significant problem”.
“While platforms have published their guidelines regarding their definitions and management of misinformation and disinformation and provided users with tools to report violative content, they fail to respond adequately to these user-reports.
“This suggests that these provisions of the code are having little meaningful impact.
“Platforms do not adequately respond to user reports of electoral process misinformation and disinformation in the way that they claim to do so in their community guidelines, despite being ‘aware of’ this content.”
The submissions made regarding the draft legislation have not yet been made public.
Both the Australian Human Rights Commission and the Law Council of Australia have so far raised concerns that the legislative changes have the potential to limit the implied right to free expression.
“There are inherent dangers in allowing any one body – whether it be a government department or social media platform – to determine what is and is not censored content,’’ the AHRC says.
“The risk here is that efforts to combat misinformation and disinformation could be used to legitimise attempts to restrict public debate and censor unpopular opinions.
“Striking the right balance between combating misinformation or disinformation and protecting freedom of expression is a challenge with no easy answer.”
TikTok’s director of public policy, Ella Woods-Joyce, said its trust and safety team was vigilant about enforcing its community guidelines, which clearly state the platform does not allow misinformation about electoral processes.
“This includes misinformation about how to vote, registering to vote, eligibility requirements of candidates, the processes to count ballots and certify elections, and the final outcome of an election,” Ms Woods-Joyce said.
“In relation to the current referendum, we work closely with the Australian Electoral Commission, the Electoral Integrity Assurance Taskforce, and other community partners, and have established dedicated reporting channels for them to use.”
Meta was contacted for comment.
X changed its policy early this year to respond to all media requests for comment with a poop emoji; however, its press email now auto-replies: “busy now, please check back later”.