
This country wants to kick under-13s off social media. Will Australia do the same?

By Charles Hymas

Children face being kicked off social media under plans for tough new age checks unveiled by the United Kingdom’s communications regulator on Wednesday.

Social media firms will be required to introduce robust checks, including the use of photo ID such as passports, to identify children using their sites and protect them from harm online.

The UK government wants tough new rules for social media use that would see children under 13 banned from platforms such as TikTok. Credit: iStock

Platforms such as Facebook and Instagram, which require users to be at least 13 years old, will be expected to use the checks to enforce the age limits, a move that could result in millions of children being removed from social media sites.

Tech firms that fail to comply with the new regime, set out in a proposed code of practice by regulator Ofcom, will face fines worth up to 10 per cent of their global turnover – equivalent to £11 billion ($20.9 billion) for a company such as Meta, which owns Facebook and Instagram.


Secretary of State for Science, Innovation and Technology Michelle Donelan said the government would no longer tolerate a “Wild West” where children were “force-fed” violence, pornography, abuse and harmful content on social media.

“The measures Ofcom have set out would not just require platforms and search services to roll out robust age checks to shield children from age-inappropriate content,” Donelan said.

“They go even further, seeking to understand the impact of addictive features like ‘infinite scrolling’ feeds, addressing the devastating effects of damaging algorithms that – little by little – increase the harms to which our children are exposed.”

Self-declaration of age, which has allowed underage children to join social media sites, will be banned under the plans.


Ofcom lists ways in which social media companies could check the age of users, including requiring photo ID such as passports, using facial age estimation – where computer software is used to calculate a user’s age – or reusable digital ID services, in which an external company provides age verification.


The social media companies will also be required to configure their algorithms – which determine the content automatically promoted to users – to filter the most harmful content from children’s social media feeds. This is intended to cover content such as self-harm, suicide and eating disorders.

The move follows the death of Molly Russell, a 14-year-old from London who took her own life after being sent 16,000 “destructive” posts on social media encouraging self-harm, anxiety and suicide in her final six months.

Ofcom chief executive Melanie Dawes said companies would need to “tame aggressive algorithms” that pushed harmful content to children and introduce age checks to ensure that children only saw content suitable to their ages.

“You cannot prevent every tragedy, but I think we can prevent future deaths by the measures we are taking,” she said.

The draft code will be presented to parliament for approval next northern spring before being implemented. It will mean that services that do not ban harmful content, and those where there is a higher risk of such content being shared, will be expected to implement “highly effective” age checks to prevent children from seeing it.

Ofcom said: “In some cases, this will mean preventing children from accessing the entire site or app. In others, it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”

The regulator cannot impose minimum age limits on social media sites, but government sources said the age checks would enable social media companies to enforce their bans on under-13s.

Donelan called for a “zero tolerance” approach to underage children on platforms such as Instagram, Facebook, Snapchat and TikTok.


“If that means deactivating the accounts of nine-year-olds or eight-year-olds, then they’re going to have to do that,” she said.

Under the code, social media firms will also have to ensure that children can provide “negative feedback”, so that algorithms learn what content they should not be sent. Companies such as Google will be expected to offer “safe search” settings that cannot be turned off by children and that filter out the most harmful content.

The Telegraph, London


