We can’t keep waiting for social media to grow up
When the Black Saturday bushfire tragedy struck in 2009, Instagram didn’t exist, Twitter was an obscure “microblogging” experiment and Facebook was a fun site for bored university students to poke each other.
So we learned about Black Saturday and formed our views based almost exclusively on traditional media reports — newspapers, television, radio.
This year, millions of Australians, and billions more people around the globe, are receiving most or all of their information about the summer bushfire crisis from social media.
This is a seismic shift. And it’s one we need to do a better job of understanding and guiding. On the one hand, we should recognise the significant good that would not have been possible in 2009.
People were reassured their loved ones were safe. Ordinary stories of survival and bravery that might never have been known — mothers rushing their children to safety or firefighters working the frontlines — have gone viral.
Celeste Barber, an Australian who gained Insta-fame for posting parody shots of herself mimicking sexy celebrity photographs, launched a heartfelt bushfire appeal on Facebook and swiftly racked up more than $50m from around the globe.
But we have also witnessed telling examples of how misinformation can now spread just as fast as the flames.
Photos and graphics doctored to misrepresent the crisis were shared by celebrities to millions, shaping views and manipulating emotion. A platform such as Facebook, of course, is “truth-agnostic”. The platform’s survival-of-the-fittest algorithm rewards engagement, not accuracy.
But beyond the confusion this sometimes created for those in the midst of danger, why should we care?
Does it really matter if the image of the girl in a gas mask clutching a koala in front of a wall of flames was a fake? Or that a supposed NASA photo of the country almost completely ablaze from space was actually an enhanced composite image? Or that the photo of a distressed family under a jetty was from Tasmania in 2013?
Given the list of bushfire concerns we now have, caring about these inaccuracies seems like an almost laughably minor worry.
But part of our reflection as we move from response and relief to recovery should absolutely include the way social media operated. Because just as no single tonne of carbon released into the atmosphere creates climate change, no single piece of misinformation erodes faith in objective truth. It’s cumulative.
If you’re constantly bombarded with fakes, how can you know what’s real? Scepticism is healthy, but the proliferation of false information on social platforms fuels chronic cynicism. As this cynicism and relativism grow, conspiratorial answers start looking more appealing, and we become ripe for manipulation.
We are beginning to understand that, during elections, filter bubbles isolate us from others. It seems these bubbles envelop us in times of emergency too.
So, while you may not have seen #ArsonEmergency trending on Twitter, or the claim that the fires had been deliberately lit by authorities along the path of a proposed high-speed rail corridor, other people’s bubbles were overwhelmed with it.
The common thread in the malicious content amplified by the algorithms is that it tells us there’s someone to blame for this disaster, and thus fuels our fear and anger.
This is the reality of platforms’ algorithms operating in an unregulated attention economy. The algorithms have learnt that the more outrageous, conspiratorial, or inflammatory the content, the more we engage and the longer we stay on the platform. There is a name for this type of persuasion design: captology. In this context, it’s being used not for information dissemination, but for addiction.
None of this is to suggest these platforms are inherently evil. But we should have both eyes wide open when assessing how they are affecting society so we can decide what our social expectations are.
Every new technology that is adopted en masse demands a sober assessment in the public interest. Whether we are talking about motor vehicles, television, or knives, it’s completely appropriate for government to apply guard rails in the community’s best interest.
Indeed, this is exactly what the community expects. Last month, a national Roy Morgan poll found that 73 per cent of Australians believed the federal government should set up an independent regulator to ensure digital platforms act in the public interest.
Little surprise, perhaps, given that three-quarters of those surveyed in the same poll believed that social media was, overall, increasing division and polarisation in our society. As Responsible Technology Australia spokeswoman Pru Goward succinctly puts it: “It’s time for social media to grow up.”
In the meantime, social media platforms might consider engendering a bit of public goodwill by harnessing the data-driven capacity they use to serve up niche, targeted advertising and deploying it to deliver timely, accurate emergency services information.
Platforms could partner with the ABC’s emergency broadcasting service and other public warning systems to deliver relevant information directly into our news feeds.
A regulatory framework that recognises the impact of these platforms and mandates that the public interest sit at the core of their operations could help with this. Such a framework should not be about stopping people from sharing their views or their experiences, but about recognising that, at critical moments, personal experience must be balanced with accurate information.
This is an area in which Australia has led. The Prime Minister took steps that were well regarded globally when he moved to crack down on violent extremist content in the wake of the Christchurch attack. Our catastrophic bushfire season should give the government the impetus to maintain that momentum.
Amit Singh is director of Responsible Technology Australia, which advocates for the ethical progression of digital technology. He is the former global head of economic policy at Uber.