

How would a social media ban actually work?

By Paul Sakkal, David Swan and Tim Biggs

Planned new Australian laws would block children from social media to guard them from the mental and social harm linked to apps such as Instagram, TikTok, Snapchat and Discord.

With almost a quarter of children aged eight to 10 using social media at least once a week, and half of 13-year-olds doing the same, the government is responding to parental discomfort about the technology that has spread from apps like Facebook to games like Roblox.

The Albanese government plans to ban children from accessing social media, but has not said how its proposal would work. Credit: Aresna Villanueva

But similar plans have faltered overseas and there is still scant detail about how the proposal would work.

Here’s what we do know.

Why has Labor announced a social media ban?

After indicating his intentions in September, Prime Minister Anthony Albanese said on Thursday that his government would move to ban children under 16 from using social media apps, pointing to growing evidence that teen mental health and social isolation are worsening. The state governments of South Australia, Victoria and NSW have previously expressed support for such an idea.

“The fact is that young women see images of particular body shapes that have a real impact in the real world. And young men through some of the misogynistic material that they get sent to them, not because they asked for it,” Albanese said at a press briefing.

“If you’re a 14-year-old kid getting this stuff at a time where you’re going through life’s changes and maturing, it can be a really difficult time.”


Full details of the legislation have not been given, but with only two weeks of parliament left and the opposition in general agreement about a ban, it could become law by the end of the year. Rather than falling on families or the children themselves, who Albanese said would not be punished for flouting the ban, the burden of compliance will fall on the tech companies that operate the platforms.

The government will take up to a year to decide key details of its plan, including how to treat online gaming platforms that include chat features.

How will it work?

Tech platforms will be required by law to take reasonable steps to ensure children under 16 are not using their services. Currently, many platforms self-regulate to bar users under 13, often by merely asking users to type in their age. Others, like Meta, have more complex systems in place to detect age.

The eSafety Commissioner will be in charge of determining what these reasonable steps are, and will issue regulatory guidance about how to take them.

The government has not said exactly what methods the eSafety Commissioner will recommend, but it has previously put forward a “double-blind tokenised approach”, whereby information would be provided to a verifying third party that would certify the user’s age to social media platforms without revealing details about the child. This could involve all Australians being made to supply IDs or biometric data to prove their age. The government is currently undertaking trials of the technology.
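
In rough terms, the token exchange could resemble the sketch below. This is illustrative Python only: the verifier, field names and shared-key signing are assumptions, not a published government design, and a real deployment would use asymmetric signatures so platforms could check tokens without being able to mint them.

import hmac
import hashlib
import json
import secrets
import time

# Held by the hypothetical third-party verifier. With symmetric HMAC the
# platform could also mint tokens, hence real systems would prefer
# public-key signatures.
SIGNING_KEY = secrets.token_bytes(32)

def issue_age_token(user_is_over_16: bool) -> dict:
    # The verifier checks ID or biometrics privately, then discloses only
    # a yes/no age claim: no name, date of birth or document details.
    claim = {
        "over_16": user_is_over_16,
        "nonce": secrets.token_hex(8),       # stops token replay or sharing
        "expires": int(time.time()) + 3600,  # short-lived by design
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(token: dict) -> bool:
    # The platform learns whether the user clears the age bar, nothing else.
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # forged or tampered token
    if token["claim"]["expires"] < time.time():
        return False  # expired token
    return token["claim"]["over_16"]

token = issue_age_token(user_is_over_16=True)
print(platform_accepts(token))  # True: age certified, identity withheld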

Some tech giants, including Meta, have argued that the app stores run by Apple and Google should be made to hand user ages over to social platforms, since they already collect them.

La Trobe University professor Therese Keane said social media was dangerous and a move like this was overdue.

“Social media companies cannot and will not self-regulate their platforms. It is a very encouraging and brave step the government has made to take control,” she said.

“Social media companies have been given ample opportunities to address these issues but it is not in their best interests to do so. These companies need to face the reality that their products have a negative impact on children.”

Which platforms are affected?

Under the current Online Safety Act, a social media platform is defined as an electronic service that satisfies the following conditions:

  1. The sole or primary purpose of the service is to enable online social interaction between two or more end users;
  2. The service allows end users to link to, or interact with, some or all of the other end users;
  3. The service allows end users to post material on the service.

The broad wording of the definition means that even chat apps built for kids, one-way content apps like YouTube, and social-focused games could be captured. However, the government has said there would be a framework to exempt apps that the eSafety Commissioner considered to be low risk.

The government argues this would give platforms an incentive to eliminate risk so they are not blocked for under-16s, but the federal opposition has said it does not want exemptions.

Communications Minister Michelle Rowland said the children’s version of YouTube, known as YouTube Kids, was a “probable candidate” for exemption but YouTube itself would fall into the category of an age-restricted service.

Will it actually work?

No country has brought in a system like the one the eSafety Commissioner is proposing, a sign of how technically difficult it would be. There are also concerns that simpler systems could be easily bypassed.

In June, tech giant Meta deployed an age verification system from technology company Yoti for Australian Facebook users trying to edit their age to gain access. Users were asked to prove their age by uploading a video selfie or submitting an ID. Similar technology has been fooled before: one reporter was able to buy a knife online by using editing software to digitally age a photo of a nine-year-old girl.

Are there other problems?

A ban on social media for children under 16 could result in even more surveillance of users.

“My god, banning people under 16 from using social media will not work,” digital rights activist Meredith Whittaker said in a recent interview. “It would basically be creating a system to monitor internet usage at a whole population scale because you can’t know that somebody is over 16 without checking everyone.”

What are other countries doing?

According to an eSafety Commissioner report, many countries are moving towards age limits.

Austria, Chile, Cyprus, Italy, South Korea and Spain are working towards an age limit of 14; the Czech Republic, Greece, Serbia and Vietnam towards 15; and Aruba, Croatia, Germany, Ireland, the Netherlands and Slovenia towards 16, though enforcement varies.


In 2019, the UK ditched a plan to stop under-18s viewing porn, after delays and worries about whether the technology would work. Some US states, including Florida and Louisiana, have moved to block teens from porn and social media.

What have the tech giants said?

Facebook owner Meta has called for the onus to be on app stores and parents. Its global head of safety, Antigone Davis, has said Apple and Google, which control smartphone app stores, should be accountable for who is allowed to download apps.

“Parents should be able to approve their teens’ app downloads, and we support legislation that requires app stores to get parents’ approval whenever a teen under 16 downloads an app,” Davis said in Canberra earlier this year.

Snapchat said in a recent submission to a Senate committee that it supports device-level age verification as the best option. “Age collection is already part of the device ID process when registering a new device, such as an iPhone or Android phone,” it said.

What role will schools and parents play?

There are already tools for parents to manage screen time and help keep children safe online.

Apple’s Screen Time is a set of parental controls built into iPhones that allow parents to manage their child’s iPhone or iPad remotely and set limits on how much time the child can spend on specific apps or on the device more generally. For Android devices there is a similar free app called Google Family Link. Another app, Qustodio, allows parents to monitor and manage kids’ activity on all devices and filter certain websites and apps.


There are calls for the government’s legislation to mandate digital literacy training as part of the standard curriculum, covering privacy, cybersecurity and online risks for students.

“We would also like to see government-led campaigns to educate parents who face a tricky battle against strong-willed tweens and teens,” said Martin Kraemer, an executive at cybersecurity firm KnowBe4. “Big tech and social media have evolved faster than parents have been able to act, so parents need more support, including clear guidelines on the best ways to monitor and restrict their kids’ screen time and content.”

What’s missing from the debate?

There are some strong benefits of social media, particularly for non-binary young people, according to ANU Professor Ben Edwards, who recently undertook a study of social media use among 20,000 15- and 16-year-olds.

“Social media may give young people capacity to connect with a tribe that they can’t otherwise connect with at their school. So if there is a ban, I’d be thinking about the young people who are most likely to be disadvantaged by it, and what additional supports can be put in place to support them?”



