NewsBite

Nine News’s doctored photo of Georgie Purcell reveals AI’s big problem

Tech giants say humans must verify AI generated content, yet the temptation to take blatant shortcuts using the technology remains dangerously irresistible.

Nine News has apologised for the photoshopping of an image featuring Animal Justice Party MP Georgie Purcell, which enlarged her breasts and exposed her stomach, saying it was the result of automation.

Finally, fake AI-generated news has come to Australia. In the nation’s first high-profile case, Nine News staff automated the resizing of an image featuring Georgie Purcell, which ended up resizing the Victorian Animal Justice Party MP’s breasts.

The program went further: it also bared the politician’s midriff, cutting a hole in her dress to create a revealing outfit.

Nine Melbourne's director of news Hugh Nailon.
The original image of Georgie Purcell.

Nine News Melbourne’s director of news Hugh Nailon — an industry veteran — apologised to Ms Purcell, saying the image was a result of “automation by Photoshop”. The program’s maker Adobe disagreed, saying such a task could not be completed without “human intervention and approval”.

Regardless, the image went to air, sparking international headlines and fury over sexism — genuine news from fake news. It serves as a warning — if you take Nailon at his word — against employees fully delegating tasks to AI.

It’s a message the tech titans make loud and clear. Microsoft named its AI-powered assistant Copilot because it is designed to work alongside humans, not replace them. “We think the human is the pilot. Without the pilot, the co-pilot isn’t really as effective,” Microsoft global head of marketing for search & AI Divya Kumar said.

“We do think there is a huge responsibility on the person using the tool, regardless of the tool being able to do the verification.”

Microsoft global head of marketing for search & AI, Divya Kumar, says humans are key to ensuring the content generated by the company’s AI Copilot platform is accurate.

And yet the temptation to surrender tasks to AI in the quest to lift flatlining productivity — or take blatant shortcuts — remains dangerously irresistible.

Last June in New York, a Manhattan federal judge issued sanctions against two lawyers who cited fake ChatGPT-generated legal research in a personal injury case, blasting the attorneys for wasting the time of the court. It also wasted their clients’ time, with the lawsuit dismissed on the grounds it was untimely (hopefully they had a no win, no fee policy).

Shortly before the Manhattan brouhaha, a Texas federal judge ordered lawyers in his court not to file AI-reliant legal briefs. It’s easy to sigh and say ‘only in America’, but Nine’s evening news bulletin with the doctored image of Purcell proved otherwise.

With AI, seeing is no longer believing. And this is a big problem.

The ability to generate research, advice — even a legal argument — via a few simple verbal prompts has heralded the greatest technological advancement in decades.

Steve Hasker, chief executive of one of the world’s oldest media and information companies, says generative AI has sparked a disruption bigger than the launch of personal computers, the internet, mobile and social media.

But, like humans, it is prone to bias and hallucinations. Last year Google’s AI chatbot backed the Indigenous voice to parliament as a “positive step”, praised Anthony ­Albanese as a “man of the people” and independent senator Lidia Thorpe for being a “strong and outspoken advocate”, adding she was “a role model for all Australians”. Meanwhile, it labelled Peter Dutton and Scott Morrison as “controversial”, spark­­ing concerns about “propaganda” from Big Tech.

Last year Google’s AI chatbot praised independent senator Lidia Thorpe for being a “strong and outspoken advocate”, adding she was “a role model for all Australians”.

Governments across the world are still struggling with how best to rein it in and prevent harm. Australian Securities and Investments Commission chair Joe Longo said on Wednesday: “no clear consensus has emerged on how best to regulate it”.

“Business practices that deliberately or accidentally mislead and deceive consumers have existed for a long time — and are something we have a long history of dealing with,” Longo said.

“But this risk is exacerbated by the availability of vast consumer data sets and the use of tools such as AI and machine learning which allow for quick iteration and micro-targeting. As new technologies are adopted, monitoring consumer outcomes is crucial.”

People make mistakes, and those working in the high-velocity environment of newsrooms are not immune. But when clear errors are made, corrections and even apologies must follow — which Nailon quickly made to Purcell.

More important, systems must be in place to ensure such a blunder doesn’t happen again. With AI, that means humans must check, check and double-check the content it generates.

Originally published as Nine News’s doctored photo of Georgie Purcell reveals AI’s big problem


Original URL: https://www.heraldsun.com.au/business/nine-newss-doctored-photo-of-georgie-purcell-reveals-ais-big-problem/news-story/da14484f0767f4fc19f4eecc0c18e8e8