
Google search that reveals huge Aussie problem

Just four words entered into the search engine expose a hidden world of horrors your children have probably already seen.


ANALYSIS

Houston, we have a problem. If you take the alarming step of heading to Google and typing in “social media child suicide”, you’ll immediately be served up a list of harrowing stories detailing instances in which children around the world have died.

In all of these stories, the tech platforms loom large.

Naturally, we need to be clear that it’s never one factor that causes a death by suicide; it’s always numerous factors.

Nevertheless, when a parent claims their beloved 11-year-old daughter died because social media addiction led directly to sexual exploitation, you sit up and pay attention.

Your heart breaks in half when you read about yet another little girl who watched a suicide video on Instagram and copied it.

It’s hard to sit with this, as a parent. You start wondering: What are my kids consuming online? How is it impacting them? What can be done to stop this?

The first question is easier to answer than the second or third one.

What we know from recent eSafety data is that 62 per cent of 14-17 year olds have seen negative online content.

What does “negative content” mean? From my work in this area, I can (sadly) tell you.

This includes, but is not limited to: suicidal ideation, self-harm, drug taking, promotion of disordered eating and starvation techniques, child sexual exploitation, violent pornography, extreme racism, sexism, ableism and animals being harmed.

Just last night, my own teenage daughter told me she’d seen a kitten put into a blender on a social media app.

This week, along with parents from across Australia, I am calling on the federal government to raise the minimum age at which children can access social media to 16, as part of a national campaign, Let Them Be Kids, to stop the scourge of social media.

One simple Google search reveals the plethora of horrors our children are being exposed to every day. Picture: Supplied
Sixty-two per cent of 14-17 year olds have seen negative online content. Picture: iStock

Yet in the same eSafety survey, only 43 per cent of their parents were aware of this. That’s right: we do not know what’s going down on our kids’ phones.

Separate research from eSafety found the average age young people first encountered porn online was 13 years old.

By age 16, 86 per cent of young Aussies had seen pornography. Interestingly, young people are generally in favour of regulating online pornography for people under the age of 16.

Right now, South Australia is considering lifting the age at which children can get onto social media to 14. Some commentators are calling for this to be 16.

Premier Peter Malinauskas was quoted saying: “There has been much examination and consequential evidence to suggest that addictive algorithms are being used to draw young people in, in a way that their developing minds are just not capable to be able to deal with.

“The proliferation of social media is not just the concern about access to content that is not healthy, but even the excessive use of social media itself is attributable to mental illness.”

The issue is that every mental health expert you speak to – and every research paper you read – has a different take on this.

For example, child psychologist Michael Carr-Gregg told the ABC some time ago that developing brains “simply do not have the neurological maturity to manage their digital footprint”.

Makes sense. But read one academic paper after another on this issue and you’ll find it’s by no means clear cut.

Many of those papers suggest the link between mental health issues and social media is unclear. One paper was a “meta” review (pun not intended) of 25 other reviews.

It stated: “When this meta-analysis analysed happiness, life satisfaction, and depression separately, it found that SNS [social networking sites] use was associated with both higher levels of wellbeing and ill-being.”

OK, that’s confusing.

Regardless of all this murkiness, it’s fair to say that parents don’t want their kids consuming all this horror and filth online.

There’s evidence from the UK that among children aged 8-12 who get around age requirements on social media sites, up to two-thirds had help from a parent. Picture: iStock

But the question of exactly what to do to stop this is infinitely complex.

A number of European countries recently tried to implement age verification.

In other words, users of certain sites would have to prove they were over 18.

But this opens its own Pandora’s box. For starters, if you’re going to ask people to verify their age, who is going to collect that data and how will it be safely stored? Do we really trust the platforms with that job?

I can guarantee hackers will be dying to get to that treasure trove of personal information.

The other problem is teenagers. They love getting around authority (yes, as a parent of one myself, I can confirm!).

Many experts have noted that young people are smart and will just use VPNs to get around age verification. Or they’ll just go to sites that aren’t responsible and don’t require it – and potentially be exposed to even more extreme content.

And here’s the kicker. There’s evidence from the UK that among children aged 8-12 who get around age requirements on social media sites, up to two-thirds had help from a parent or guardian. That’s right. Their parents are helping them lie. Oof. It’s sticky!

The challenge of how to protect children online is a tricky one. Picture: iStock
Governments can force the platforms to comply with community expectations if they really want to protect children. Picture: iStock

It’s also flat-out odd behaviour on the parents’ part.

As psychologist and cyberpsychology educator Jocelyn Brewer said to me: “You don’t take your 13-year-old down to the RTA (Road Traffic Authority) and say: ‘Oh, my 13-year-old is really ready to drive a car, so can they please have their Ls three years earlier than the legal age?’

“If we could get parents to understand the importance of delaying social media use, that would be where I would start.”

Professor Justin Patchin, who co-directs the US-based Cyberbullying Research Center, has been looking at the issue for more than 20 years.

He’s sceptical that banning children of particular ages will “solve any of the problems people think it will.”

“To begin with, it should be obvious that if a child wants access to an app, they will get it.

“Many children are experiencing mental health challenges, yes, but I’m not convinced social media is the main cause of that, and the majority of kids — even those on social — are doing just fine.”

Instead, Dr Patchin throws the spotlight elsewhere: “If parents believe their child shouldn’t be on an app until a certain age, what is stopping them from enforcing such a ban themselves?”

Jocelyn Brewer tends to agree. In a blog post, she writes: “My experience over the last decade is that families who use intentional, informed and intelligent strategies to master their device use, have strong communication skills and stable authoritative foundations often avoid many of the digital disasters we fear the most.”

For many years I’ve wondered why we put the onus on the victims to do all the work – in this case young children and/or their parents.

Why not force the social media companies to have a legislated duty of care to users, so they have to keep us safe?

Author Ginger Gorman. Picture: Hilary Wardhaugh

In Germany there’s a law called the Network Enforcement Act or NetzDG. It’s not perfect. But it does force the platforms to take cyberhate down within 24 hours – or face huge fines.

So governments can force the platforms to comply with community expectations if they really want to.

Meantime, Australia’s eSafety Commission is pushing ahead with its own trial of how “a mandatory age verification mechanism for online pornography could practically be achieved in Australia”.

Hopefully, they will find some viable pathways through this maze and work out whether and how it can be done. As a parent, I’ll be watching with keen interest.

Ginger Gorman is a social justice journalist, cyberhate expert and author of the best-selling book Troll Hunting. She edits the feminist blog BroadAgenda at the Faculty of Business, Government and Law at University of Canberra.

Originally published as Google search that reveals huge Aussie problem

Original URL: https://www.couriermail.com.au/technology/online/google-search-that-reveals-huge-aussie-problem/news-story/5f9bf9005602dcfe2aea020015de38cc