‘Biggest threat to humanity’: Sinister truth behind Melbourne mayor’s AI fail
An Aussie mayor’s creepy AI fail has gone viral – but according to Caleb Bond, it hides a very sinister truth we all need to wake up to.
COMMENT
Artificial intelligence is all fun and games until someone dies.
Well, not a real person – not yet, anyway.
Melbourne Lord Mayor Nicholas Reece this week posted to X four images of would-be new parks, announcing that he would build 28 of them if re-elected.
They all looked rather green and sunny and fun – until you saw what appeared to be a dead man with extra limbs sprawled facedown in a playground.
I know Johnny Cash sang about being hungover in the park in Sunday Morning Coming Down, but that seems to be going a little far.
There was also a floating shoe, a boy wearing a long T-shirt and no trousers, and a woman with a rather concerning lump growing from the side of her leg.
I, personally, will not be supporting any new parks in Melbourne unless they include mutant dead bodies as standard.
The images were presumably produced with AI technology, and Mr Reece made light of the errors by photoshopping in a big robot, a flying car and a black panther, saying “you guys should have seen the originals”.
It’s all a bit of a laugh.
But what isn’t funny is that AI technology is working its way into everyday life at a rapid pace while it is still clearly deficient.
And once it has sorted out those deficiencies it will be the biggest power on the planet – more powerful than any human being – and will become the single biggest threat to humanity. That is not an exaggeration.
Artificial intelligence is currently only as good as the people who program it, which opens it up to all sorts of biases and errors.
Meta’s AI bot, for instance, recently told users that the first attempted assassination of Donald Trump was a “fictional event” and that “there has been no real assassination attempt on Donald Trump”.
Others seeking information about the shooting were told that the AI bot doesn’t “always have access to the most up-to-date information” – but it didn’t hesitate to give information about Kamala Harris’ campaign.
It is also prone to what experts call “hallucinations”, which is effectively when AI bots lie or make things up because they think that’s what you want to hear.
“I see there’s a bit of commentary about the renders for the 28 new parks I’ll get built if I’m re-elected Mayor. You guys should’ve seen the originals!” — Nick Reece (@Nicholas_Reece), September 23, 2024
I experimented with ChatGPT when it first gained notoriety and asked it whether I had ever appeared on ABC’s Q&A program (I haven’t).
The bot initially gave the correct answer but, when I pressed and said I thought I had, it apologised and nominated an episode on which I supposedly appeared with Malcolm Turnbull.
The dates, topic and guests ChatGPT mentioned all checked out – except that I was not on the episode.
I tried the same again with a different show on which I have never appeared and it again went to water when I disagreed.
It was telling me what it thought would make me happy, which, when you think about it, could be quite scary if used for a more nefarious purpose.
A survey published recently in the journal BMJ Health &amp; Care Informatics found that a fifth of GPs are using AI to complete daily activities, including asking it for diagnoses or suggested treatments.
The jobs this technology could eventually replace are innumerable.
And the data being fed into these robots is taken from us without our knowledge.
Meta recently admitted that every public photo and post on its platforms since 2007 has been fed into its AI bot.
Then LinkedIn, owned by Microsoft, was caught using user data to train its AI technology unless you opted out.
None of us who posted on Facebook back in 2007 ever expected our posts would one day be fed to a robot that will eventually try to take our jobs.
And the more that is fed to these robots, the smarter they get.
Sam Altman, the chief executive of OpenAI – the company behind ChatGPT – told a US Senate judiciary committee that AI could “cause significant harm to the world” and that “if this technology goes wrong, it can go quite wrong”. He said he was worried about “the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation”.
AI will, one day, outsmart us.
Forget about climate change – this monster, designed by us, could end the human race much faster.
Originally published as ‘Biggest threat to humanity’: Sinister truth behind Melbourne mayor’s AI fail