AI medical scribes spark concern over healthcare hacking
Would you entrust your medical secrets to a bot? Doctors’ growing use of AI has raised fears of misdiagnosis and the hacking of medical records.
Privacy Commissioner Carly Kind has warned doctors against using AI scribes to listen in on patient conversations without permission, as more medicos embrace artificial intelligence to help diagnose disease and write their correspondence.
Amid concerns over potential misdiagnosis, litigation and even hacking of patient secrets, Ms Kind said medical clinics cannot force patients to agree to the use of AI scribes to monitor and analyse conversations with their doctors.
“We’ve been looking at these AI scribes that are used in doctors’ surgeries,’’ she told The Australian. “They can’t force you to do anything when it comes to your medical information.
“That’s sensitive information under the Privacy Act, so anyone who wants to collect it or use it needs your consent.
“You cannot be required to give your consent – it has to be voluntary, informed, freely given, and specific.
“You absolutely have the ability, as a user of a GP service who wants to use one of these AI scribes, to withhold consent.’’
A growing number of medical clinics – including psychology practices – are asking patients to consent to the use of AI to transcribe conversations and then write case notes, care plans and referral letters.
Often consent is given verbally, or by ticking a box online, without any explanation of how the data will be used, stored or shared.
Medical scribe Heidi claims on its website that it can detect “small talk’’ between patients and medicos. “Psychologists use Heidi to increase engagement, restore eye contact, and offer warmer mental healthcare,’’ it states.
“Heidi intelligently sifts through your therapy sessions, separating meaningful clinical insights from small talk and irrelevant details.’’
The Heidi AI – which transcribes conversations in real time – stores the notes and transcripts but not the audio recordings.
Consumers Health Forum chief executive Elizabeth Deveny said she had heard of patients who had trouble correcting AI-generated errors in their medical records. She called for better controls and transparency over how AI is used in medicine.
“We don’t have national regulations or policy about this,’’ she said. “Where will my data go, and who can access it?’’
Dr Deveny said the software companies that sold AI scribes should be banned from on-selling patient data to third parties.
“Consumers would not expect when they talk to their doctor or nurse practitioner that the information would ever be accessible to anyone else,’’ she said.
“People assume that space, when you’re having a consultation with your healthcare professional, is sacred – a little similar to the confessional, where you’ll say what you need to say and it won’t go any further unless you agree that information could be sent to someone for a referral.
“What if someone hacks into the database of a digital scribe, and that information becomes available? That’s only a matter of time.’’
The Australian Medical Association’s chair of public health, Michael Bonning, said AI scribes were being used by one in every four or five Australian doctors. He said most of the AI tools did not send data offshore or store it long-term, and that AI scribes saved time by summarising consultations that might cover five or six medical issues for each patient.
“Many doctors did not come through medicine as great typists, so the ability to add meaningful clinical details often takes a long time if you have to type it out,’’ he said.
Dr Bonning said there was “no direct ability’’ for insurance companies to “reach into your medical notes’’ but he could not guarantee that the full transcripts of conversations would not be subpoenaed by insurance companies or parties in domestic violence or divorce disputes.
“You’d have to speak to the companies who run those scribes, I don’t know exactly how long … they keep that information,’’ he said. “Australia needs to have a regulatory framework that provides transparency around AI application, so that doctors and the community can understand what they’re using and how it’s being used.’’
The Royal Australian College of General Practitioners has warned doctors to check AI scribes’ notes for accuracy, as the software “can produce errors and inconsistencies’’.
In advice to GPs, it cautions against the “potential for GPs to become over-reliant on their use and pay less attention to crucial details or forgo the vital process of checking the output generated by the AI scribe, resulting in errors that could affect patient safety’’.
The RACGP says software vendors could collect data to train AI models, or on-sell patient data to a third party.
Avant, which provides medico-legal advice and indemnity insurance, has warned doctors they are liable for any AI-generated errors.
“Inaccurate notes can contribute to treatment issues, including misdiagnosis,’’ it states in advice to doctors. “It will not be a defence to say an error was due to the use of an AI scribe.
“It is always your responsibility to review and check notes for accuracy before including them in a patient’s medical record.’’