ChatGPT will see you now: How AI is revolutionising healthcare

By Liam Mannix and Angus Thomson

The pamphlets began to pile up on Renata Bernarde’s desk: flyers for her new drugs, diagnoses she did not understand, lists of side effects. Each time she’d see a doctor, she’d come home with more desk litter.

“It’s all Greek to me,” Bernarde says. There were so many pills for her conditions – postural orthostatic tachycardia syndrome (dizziness when standing up), mast cell activation syndrome (anaphylactic-like symptoms), fibromyalgia (muscle pain and tenderness) – she started a spreadsheet.

AI is “too good for me to not use it”, says Renata Bernarde. Credit: Luis Enrique Ascui

Many of her specialists are too busy to explain what’s happening to her or to talk to each other. Last year, one specialist did not inform the other about a diagnosis, and Bernarde ended up on the wrong medication.

That’s when Bernarde turned to ChatGPT. She started by simply asking “him, or it, or whatever” questions. Then she began uploading her medical history, doctors’ case notes, scans of pamphlets, her prescriptions spreadsheet and test results. She finds one prompt to be particularly effective: “Explain this to me like I’m eight years old. What are the risks?”

AI has become Bernarde’s tour guide in a world she otherwise could not understand. “It’s too good for me to not use it,” she says.

Bernarde is at the forefront of what some experts predict is a coming AI-powered revolution in healthcare.

Professor Declan Murphy of the Peter MacCallum Cancer Centre: “This is the new world we live in.”

This month, the first Australian patients began receiving AI-guided drug therapy for epilepsy.

“This is the new world we live in,” says Professor Declan Murphy, director of genitourinary oncology at the Peter MacCallum Cancer Centre. He’s seeing patients who have uploaded their scans and test results to ChatGPT and come in armed with their own diagnosis and treatment options.

“It usually describes correctly what we see,” he says. “It’s a totally different ball game to Dr Google.”

Online symptom checkers are often inaccurate. AI models appear to represent a big step up.

‘We live in a world that is way past “doctor knows best”.’

Dr Grant Blashki

In studies that have not yet been peer-reviewed, consumer AI models are passing medical exams and acing “clinical reasoning” tests. Perhaps most impressive is their performance on “long case” tests, where physicians are required to have long, complex conversations with their patients. But their MRI interpretation is more uneven, and they still “occasionally get it completely wrong”, says Murphy.

“In the past 10 years, one thing has become very clear: AI is already doing a better job than human doctors. There is very clear evidence on that,” says Zongyuan Ge, the director of augmented intelligence and multimodal analytics at Monash University’s Health Lab.

Meanwhile, Australia’s health system has never been under more pressure. Bulk-billing rates dropped to a low of 69 per cent in 2024 (they have since risen) and the average out-of-pocket fee for a visit to a GP is now $47. “We spend so much money with specialists,” says Bernarde. “And it’s disjointed advice.”

But medical bodies and AI developers urge patients to remain cautious. The models give confident answers even when they are flat-out wrong. They are not substitute doctors.

“To assess someone properly, they need to be sitting in front of us,” says Australian Medical Association vice president Julian Rait.

However, some practitioners have come to accept their patients are going to use the technology.

“We live in a world that is way past ‘doctor knows best’,” says Dr Grant Blashki, a GP who sits on the advisory board for the Alliance for Artificial Intelligence in Healthcare. “If it’s being looked at for general advice, but not your sole source of advice, and it’s combined with having a real doctor as well, I’m not against it.”

Blashki has even started recording simulated consults and uploading the video to an AI model so it can coach him.

“The interpersonal stuff is a bit creepy, but the factual stuff – did you ask the right questions and examine the right things? – is fascinating.”

Last week, Bradley Lark became one of the first people whose disease may be treated directly by AI.

Lark, a trainer at a high school, was diagnosed last year with epilepsy, a condition whose treatment is characterised by uncertainty. There is a wide variety of drug treatments, but doctors generally cannot predict with any precision which will work for a particular patient.

“Most of the time, it’s a bit of a toss of the coin,” says Professor Patrick Kwan.

Dr Grant Blashki records simulated consults and uploads the video to AI so it can coach him. Credit: Simon Schluter

Kwan and his team have built an AI model that ingests medical history, genetic data, risk factors, MRI scans and EEGs (recordings of the brain’s electrical activity). It may be able to find information that was overlooked, says Kwan.
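The article does not detail how the model is built, but a common recipe for this kind of multimodal prediction is to reduce each data source to a fixed-length feature vector, concatenate the vectors, and train a classifier to estimate a candidate drug’s chance of working. The sketch below illustrates that recipe on synthetic data; every feature name, dimension and label is an invented stand-in, not anything from Kwan’s system.

```python
# Hypothetical sketch of multimodal drug-response prediction on synthetic
# data. Nothing here comes from Kwan's model; all shapes are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Per-patient feature vectors, one block per modality (all invented):
history = rng.normal(size=(n, 8))    # coded medical history
genetics = rng.normal(size=(n, 16))  # e.g. variant burden scores
mri = rng.normal(size=(n, 10))       # summary imaging features
eeg = rng.normal(size=(n, 12))       # summary EEG features
X = np.hstack([history, genetics, mri, eeg])

# Synthetic label: did the patient respond to a given drug?
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```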

A study by Kwan’s team, published in the medical journal JAMA and using only dummy data, found version 1.0 of the model was twice as accurate at picking effective medicines as a typical physician.

Now, The Alfred hospital is testing the AI-informed prescriptions against doctors’ own analyses in human patients, tracking outcomes over a year. Kwan says neurologists were fairly concerned when the trial was described to them, but Lark is enthusiastic.

“I really see advantages to it. The doctor is like the gatekeeper,” Lark says. “I just see it as getting the best outcome for the patients.”

‘In the past 10 years, one thing has become very clear: AI is already doing a better job than human doctors. There is very clear evidence on that.’

Zongyuan Ge, Monash University

At a NSW specialist clinic, an AI system sold by Artrya has screened more than 1500 patients for coronary heart disease, while Beamtree’s AI has monitored more than 28,000 patients at a NSW private hospital for clinical deterioration.

In South Australia, the state government has been under enormous pressure to fix ambulance ramping. One solution the government is trialling is an AI model that reads vital signs, blood tests and doctors’ notes, and spots patients who may be ready to go home.

“Currently, we’re relying on people synthesising lots of things. This [AI] does the work for you,” says Joshua Kovoor, one of the researchers who designed the South Australian model.
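As a rough illustration of that synthesis, the toy model below scores synthetic inpatients for discharge readiness from structured observations. It is not the South Australian system; the inputs, label and model are all invented stand-ins.

```python
# Toy discharge-readiness scorer on synthetic data; purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

vitals = rng.normal(size=(n, 4))               # e.g. scaled obs chart values
bloods = rng.normal(size=(n, 6))               # e.g. scaled pathology results
note_flags = rng.integers(0, 2, (n, 3)) * 1.0  # e.g. "mobilising", "eating"
X = np.hstack([vitals, bloods, note_flags])
y = (X @ rng.normal(size=X.shape[1]) > 0).astype(int)  # invented outcome

clf = LogisticRegression(max_iter=1000).fit(X, y)

# In use, a ward list would be re-scored each morning and the highest-
# probability patients flagged for a clinician to review for discharge.
print(np.round(clf.predict_proba(X[:5])[:, 1], 2))
```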

In a month-long test at a busy suburban hospital, the AI cut the average length of stay by 6 per cent – a saving worth $9.5 million a year to the hospital. Even better, patients discharged were less likely to need to come back.

The team has presented the AI model to hospitals in Victoria and NSW.

While many in the field are optimistic – even gung-ho – about the technology’s potential, there have been cautionary tales.

Epic Systems, one of the world’s largest healthcare software firms, rolled out multiple AI models to US hospitals, claiming to predict the risks of a range of conditions.

But these models are privately owned and not vetted by US regulators. When researchers finally tested them, they found the models either dramatically underperformed compared with existing non-AI warning systems or, worse, missed 67 per cent of patients with sepsis while generating so many false alarms that doctors were left with “alert fatigue”.

An Epic spokesman disputed that there were problems with the models and said they had not been rolled out in Australia.

Meanwhile, when researchers at the University of Sydney used a system based on ChatGPT to summarise de-identified medical records into easy-to-understand discharge instructions for patients leaving hospital, one in five of the generated summaries contained some kind of safety issue.

Most mistakes arose from the language model cutting out crucial information, such as the need to follow up with the patient’s GP a week after discharge. But in a far more serious 3 per cent of cases, ChatGPT told patients to take different medications to those originally prescribed.
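One conceivable safety net for that last failure mode (not something the Sydney researchers describe) is a hard check that holds back any generated summary naming a drug the patient was never prescribed. A toy version, with invented drug lists:

```python
# Toy medication cross-check for machine-generated discharge summaries.
# Drug names and lists are invented for illustration.
DRUG_LEXICON = {"metformin", "atorvastatin", "warfarin", "perindopril"}
PRESCRIBED = {"metformin", "atorvastatin"}

def flag_unprescribed(summary: str) -> set[str]:
    """Return drugs named in the summary that were never prescribed."""
    mentioned = {drug for drug in DRUG_LEXICON if drug in summary.lower()}
    return mentioned - PRESCRIBED

summary = "Continue Metformin daily and start Warfarin 5mg at night."
print(flag_unprescribed(summary))  # {'warfarin'}: hold for human review
```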

Some experts say part of the problem with AI models is they are “black boxes” – we cannot effectively understand how they arrived at an answer, and the models struggle to explain it themselves.

“It is beyond human capabilities to understand what’s happening,” says Zongyuan Ge.

Stephen Bacchi, who co-developed the South Australian AI model, counters that we don’t know how many things work, yet we still use them; paracetamol is a classic example.

Then there’s the “interoperability” problem: the assumption that an AI model trained in one hospital can simply be picked up and used in another.

“But that’s not how AI works,” says Anton Van Der Vegt, a computer scientist at the University of Queensland’s Centre for Health Services Research. There is no guarantee, he says, that an AI model that works for one group of patients will work for another without re-training and re-validation.
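His point can be seen even on synthetic data. The sketch below fits a classifier at one imaginary “hospital”, where features relate to outcomes one way, then tests it at a second site where those relationships differ, and accuracy typically drops. Everything here is invented for illustration.

```python
# Illustration of why a model validated at one site needs re-validation at
# another: synthetic cohorts with different feature-outcome relationships.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_cohort(w, n=2000):
    X = rng.normal(size=(n, 10))
    y = ((X @ w + rng.normal(scale=0.8, size=n)) > 0).astype(int)
    return X, y

w_a = rng.normal(size=10)                   # feature-outcome links, site A
w_b = w_a + rng.normal(scale=1.5, size=10)  # different links at site B

X_a, y_a = make_cohort(w_a)
X_b, y_b = make_cohort(w_b)

model = LogisticRegression(max_iter=1000).fit(X_a[:1000], y_a[:1000])
auc_a = roc_auc_score(y_a[1000:], model.predict_proba(X_a[1000:])[:, 1])
auc_b = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"held-out site A AUROC: {auc_a:.2f}")  # strong
print(f"site B AUROC: {auc_b:.2f}")           # typically much weaker
```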

Regardless, many are willing to take the risk.

“It’s a risk I have to take,” says Renata Bernarde. “It has really helped me a lot.”
