A new breed of AI is changing healthcare. But it comes with a warning

By Angus Thomson

A computer-assisted needle misses its target, puncturing the spine. A diabetic patient goes rapidly downhill after a computer recommends an incorrect insulin dosage. An ultrasound fails to diagnose an obvious heart condition that is ultimately fatal.

These are just a few examples of incidents reported to the United States’ Food and Drug Administration involving health technology assisted by artificial intelligence (AI), and Australian researchers say they are an “early warning sign” of what could happen if regulators, hospitals and patients don’t take safety seriously in the rapidly evolving field.

Sydney-based company EMVision is using AI to develop a bedside device that can help diagnose stroke. Credit: Dion Georgopoulos

“This is essentially showing us that when we’re putting in AI systems, we just need to be taking the safety of these systems really seriously,” Professor Farah Magrabi said.

Her team at the Australian Institute of Health Innovation at Macquarie University this month published a review, in the Journal of the American Medical Informatics Association, of 266 safety events involving AI-assisted technology reported to the US watchdog.

Most events (82 per cent) involved cases in which the AI was given incorrect or insufficient data required to work properly, while 11 per cent related to problems with the algorithm or the hardware of the device itself.

Only 16 per cent actually led to patients being harmed, but two-thirds were found to have the potential to cause harm, and 4 per cent were categorised as “near-miss events” in which users intervened.

Professor Farah Magrabi and Dr David Lyell from the Australian Institute of Health Innovation at Macquarie University.

Co-author Dr David Lyell said issues most commonly arose when users failed to enter the correct data, producing an incorrect result, or misunderstood what the AI was telling them.

For example, one patient suffering a heart attack delayed medical care because an over-the-counter electrocardiogram device – which is not capable of detecting a heart attack – told them they had “normal sinus rhythm”.

“AI isn’t the answer; it’s part of a system that needs to support the provision of healthcare. And we do need to make sure that we have the systems in place that support its effective use to promote healthcare for people,” Lyell said.

The researchers chose to analyse cases in the US, where the implementation of AI-enabled health devices is more advanced than in Australia. The US regulator has, to date, approved 521 artificial intelligence and machine learning-enabled medical devices, with 178 of those added in 2022.

In Australia, the Therapeutic Goods Administration (TGA) does not collect data on how many approved devices have AI or machine-learning components, but Magrabi said the regulator was taking the issue “very, very seriously”.

“AI isn’t the answer, it’s part of a system that needs to support the provision of healthcare.”

David Lyell, Australian Institute of Health Innovation, Macquarie University

AI has been used in healthcare devices for decades, but an explosion in data collection, computing power and advanced algorithms has opened up new frontiers, said David Hansen, chief executive of the CSIRO’s Australian E-Health Research Centre.

“[AI] really has the ability to increase the efficiency of Australia’s healthcare system [and] reduce errors,” he said. “There’s a lot it can do, we just want to make sure we’re doing it properly.”

Sydney-based medical device start-up EMVision is one Australian company taking advantage of these advancements to develop a portable device for diagnosing stroke without the need for an MRI.

During development, the company is using an advanced algorithm and high-powered computers to simulate strokes at numerous locations in the brain, building a database of synthetic, MRI-like images that are then compared with real MRI and CT results from clinical trials at the Royal Melbourne, Liverpool and Princess Alexandra hospitals.
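The article doesn't describe EMVision's pipeline in detail, but the broad pattern, simulating lesions at many candidate locations to build a synthetic database and then matching real scans against it, can be sketched. The toy example below is a hypothetical illustration only: the 2D "image" representation, blob-shaped lesions and nearest-neighbour matching are all assumptions for the sketch, not the company's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scan(lesion_xy, size=64, radius=5.0):
    """Hypothetical stand-in for a physics simulation: render a 2D
    'brain image' with a blob-shaped stroke lesion centred at lesion_xy."""
    y, x = np.mgrid[0:size, 0:size]
    lesion = np.exp(-((x - lesion_xy[0]) ** 2 + (y - lesion_xy[1]) ** 2)
                    / (2 * radius ** 2))
    return lesion + 0.05 * rng.standard_normal((size, size))  # sensor noise

# Build the synthetic database: one simulated scan per candidate location.
locations = [(x, y) for x in range(8, 64, 8) for y in range(8, 64, 8)]
database = np.stack([simulate_scan(loc) for loc in locations])

def locate_lesion(scan):
    """Match a scan against the database (L2 distance) and return the
    lesion location of the closest synthetic image."""
    diffs = database.reshape(len(database), -1) - scan.ravel()
    return locations[int(np.argmin(np.linalg.norm(diffs, axis=1)))]

# A 'real' scan is mocked here by simulating a lesion near (30, 20).
print(locate_lesion(simulate_scan((30, 20))))  # -> grid point nearest (30, 20)
```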

Scott Kirkland and Forough Khandan of EMVision, which is developing a portable device for diagnosing stroke. Credit: Dion Georgopoulos

“We wouldn’t be able to do what we’re doing today if we didn’t have the [high-powered computer] infrastructure for the simulation,” said head of product development Forough Khandan.

Co-founder Scott Kirkland said the intention was not to completely replace CT and MRI scans, but to diagnose stroke in the first hour when treatment is most effective. The bedside device, set to be launched in 2025, will use a “traffic light” system based on a probability algorithm to help determine what type of stroke might have occurred.

“It’s better for an algorithm to give an ‘I don’t know’ than an incorrect answer, and have the wrong treatment and/or triage process followed,” Kirkland said.
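Kirkland's point maps onto a standard pattern in machine learning, sometimes called selective prediction or abstention: the system only commits to an answer when its confidence clears a threshold. A minimal sketch of how a "traffic light" wrapper over class probabilities might work follows; the thresholds, labels and function are illustrative assumptions, not EMVision's design.

```python
def traffic_light(probs, green=0.90, amber=0.60):
    """Map class probabilities to a traffic-light decision.

    probs: dict mapping stroke type -> probability (summing to ~1).
    Returns (light, answer): 'green' = confident call, 'amber' =
    tentative call flagged for clinician review, 'red' = abstention.
    Thresholds here are illustrative, not EMVision's actual values.
    """
    best = max(probs, key=probs.get)
    if probs[best] >= green:
        return "green", best
    if probs[best] >= amber:
        return "amber", best
    return "red", "I don't know"   # abstain rather than guess

print(traffic_light({"ischaemic": 0.95, "haemorrhagic": 0.05}))
# ('green', 'ischaemic')
print(traffic_light({"ischaemic": 0.55, "haemorrhagic": 0.45}))
# ('red', "I don't know")
```

The design choice worth noting is that low confidence maps to an explicit abstention rather than a forced guess, which is exactly the trade-off Kirkland describes.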

Radiology is at the forefront of the rapid adoption of AI in healthcare, especially in breast cancer screening and analysis of chest X-rays.

“A couple of years ago, almost no radiologist would say they use it; now a fair percentage would say that they use it in their daily work,” said clinical radiologist and AI safety researcher Dr Lauren Oakden-Rayner.

The Royal Australian and New Zealand College of Radiologists said last year that “current regulatory mechanisms have not evolved alongside recent breakthroughs in AI technology, and may no longer be fit for purpose”.

Oakden-Rayner, a member of the college, said the technology had many potential benefits, but Australian regulators and clinicians needed to better understand the risks of fully autonomous systems before putting them into hospitals, clinics and homes.

“Humans are legally and morally responsible for decision-making, and it’s taking some of that out of human hands,” she said. “There’s no reason autonomous AI systems can’t exist ... but they obviously have to be tested very, very tightly.”
