
Big tech misinformation code set for review as fake news rises

The existing voluntary code of practice has been criticised for being ‘opt-in’ amid accusations the tech giants aren’t doing enough to fight online conspiracy theories.

Facebook removed over 180,000 pieces of content relating to harmful health misinformation from Australian pages or accounts in 2021. Picture: AFP

The Australian lobby group representing tech giants including Meta, Google and Twitter will review its code of practice on disinformation and misinformation, a week after the companies self-reported a spike in fake news and conspiracy theories on their platforms.

The voluntary code was launched by lobby group DIGI in February 2021 and has so far been adopted by eight signatories – Apple, Adobe, Google, Meta, Microsoft, Redbubble, TikTok and Twitter – each of which has committed to combating online misinformation.

The two core requirements for signatories are that they commit to the core objective of providing safeguards against harms that may be caused by disinformation and misinformation, and that they release an annual transparency report.

The code has, however, been criticised by Australia’s communications regulator ACMA for being an “opt-in” rather than an “opt-out” model, and for not applying to private messaging or news aggregation services.

Critics including Reset Australia and the Centre for Responsible Technology have also called for the code to be mandatory for online platforms, rather than voluntary.

“It is good that the major tech companies are recognising the need to take some responsibility for the content that drives their networks,” Centre for Responsible Technology director Peter Lewis said.

“But the foxes are still in charge of the henhouse. We need binding standards for platforms that place real obligations on the big tech companies to manage the safety of their networks.”

Reset Australia director of tech policy Dhakshayini Sooriyakumaran called DIGI’s self-regulation model “laughable”.

“If DIGI are serious about cracking down on the serious harms posed by misinformation and polarisation then it should join Reset Australia and other civil society groups globally in calling for proper regulation,” she said.

“We need answers to questions like ‘How do Facebook’s algorithms rank content?’ (and) ‘Why are Facebook’s AI-based content moderation systems so ineffective?’ ”

DIGI has released a discussion paper and is now accepting public submissions as part of a planned review of the code, amid a rise in online misinformation.

ACMA has oversight of the code’s implementation and prior to the election the former government agreed to the watchdog’s recommendations that it be granted new regulatory powers including information gathering and reserve code-making powers.

Research from ACMA found 82 per cent of adult Australians reported having experienced misinformation about Covid-19 over the past 18 months, with 22 per cent of those reporting experiencing “a lot” or “a great deal” of misinformation online.

ACMA chair Nerida O'Loughlin.

DIGI managing director Sunita Bose said the organisation accepts in principle the five recommendations for improvement from ACMA.

“The goal of the code is to incentivise best practice by technology companies in relation to misinformation and disinformation, through driving greater transparency, consistency and public accountability,” Ms Bose said.

“Hearing the views of academics, civil society and other Australians through our previous public consultation helped DIGI shape the code, and now we want to hear from them again as to whether it needs to be amended.”

Questions posted by DIGI in its discussion paper include whether the code should be extended to include private messaging services; and whether the code meets the needs of industry and the community to balance concerns about misinformation and disinformation with the need to protect freedom of expression online.

It comes after transparency reports released last week showed the arrival of the Delta coronavirus strain and severe state government lockdowns led to a spike in misinformation on Gen Z social media platform TikTok, with thousands of Australian videos removed over the past year.

A month-by-month breakdown shows the number of Australian-made medical misinformation videos removed from TikTok skyrocketed by 582 per cent in September 2021, with 4476 breaches up from 656 a month earlier. Thousands of videos were removed every month thereafter.

The reports highlight the spread of fake news, propaganda and conspiracy theories amid the pandemic.

The growth in medical misinformation removals trended alongside factors directly related to Covid including the arrival of the Delta strain, government measures to manage infections and the parallel rollout of the vaccination program.

YouTube reported that it removed more than 5000 videos uploaded from Australian IP addresses for containing dangerous or misleading information, with over 70,000 videos removed globally.

Facebook, meanwhile, removed more than 180,000 pieces of content relating to harmful health misinformation from Australian pages or accounts in 2021.

DIGI is accepting public submissions to inform potential changes to the code until July 18, 2022.



Original URL: https://www.themercury.com.au/business/big-tech-misinformation-code-set-for-review-as-fake-news-rises/news-story/5fab4d434dbd33af9d23f90a345e1268