NewsBite

Exclusive

Judge cautions against reliance on legal chatbots

A federal court judge has cautioned big law firms against relying on new technologies, as Australian companies move to make legal chatbots part of employees’ day-to-day work.

Federal Court Justice Melissa Perry has sternly warned legal practitioners to approach AI “with a high degree of caution”. Picture: Federal Court

A federal court judge has sternly warned legal practitioners to approach AI “with a high degree of caution” and cautioned big law firms against relying on new technologies, as Australian companies move to make legal chatbots part of employees’ day-to-day work.

In a speech delivered at the Commonwealth Law Conference, Justice Melissa Perry also lambasted a Colombian judge who used ChatGPT to decide whether a minor diagnosed with autism was exempt from paying for medical treatment.

Justice Perry, who has spoken extensively about the dangers of generative AI, said the legal profession should be a long way from depending on technology for deep research or critical decision making.

“How much confidence should we have in these kinds of tools?” she said. “Clearly there is room for improvement and the answers should be approached with a high degree of caution, if not scepticism.”

Her comments follow the adoption of machine learning by the legal arm of consulting firm PwC, which recently entered into an exclusive Australian partnership with legal chatbot Harvey AI.

About 4000 lawyers across PwC’s global business will eventually gain access to the chatbot, which they can ask questions to generate a “base level of knowledge” on a topic, informing their legal advice.

KPMG has also gained access to its own private version of ChatGPT, named KymChat, via a partnership with Microsoft. As KymChat is owned by KPMG, the company said it can perform tasks and answer questions without client data leaving the organisation.

Ashurst, Lander & Rogers and Gilbert + Tobin have all backed the adoption of AI and machine learning to varying extents.

But while Justice Perry acknowledged there could be some benefits in using the technology, she cautioned against a “well-documented risk of bias” when people relied on AI.

“It is important that caution is exercised before using automated and machine learning processes in decisions of an evaluative or discretionary nature, even if the machine is employed for only part of a decision,” she said.

“An implicit assumption is made to the superiority of machines to assemble accurate information and to reach more accurate conclusions.”

Justice Perry took aim at a Colombian judge who recently admitted he used ChatGPT to decide whether an autistic child’s insurance would cover the cost of their medical bills.

Legal documents showed Juan Manuel Padilla, a judge in the Caribbean city of Cartagena, asked the AI bot the question: “Is an autistic minor exonerated from paying fees for their therapies?”

A screen displays the logo of ChatGPT, the conversational artificial intelligence software. Picture: Lionel Bonaventure/AFP

ChatGPT’s response corresponded with the judge’s final decision: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.”

Justice Perry said “mercy, compassion, equality and fairness” were key pillars upholding the judicial system that could not be replicated with technology.

“No robot yet has been created with a conscience, let alone the capacity for sentience and independent thought,” she said.

While the judge defended his use of the technology, saying it was employed to “facilitate the drafting of texts” and not as the sole issuer of a decision, Justice Perry said “terrible” consequences could flow from this type of practice.

“One risk in using information from such tools or systems in the decision-making process is of a failure by the human decision-maker to bring a properly ­independent mind to bear on the issues and interrogate the data provided by such technologies,” she said.

“Those risks may potentially have terrible or unforeseen consequences, depending upon the context.”

Ellie Dudley, Legal Affairs Correspondent

Ellie Dudley is the legal affairs correspondent at The Australian covering courts, crime, and changes to the legal industry. She was previously a reporter on the NSW desk and, before that, one of the newspaper's cadets.


Original URL: https://www.theaustralian.com.au/nation/judge-cautions-against-reliance-on-legal-chatbots/news-story/b53800566f21e47506d0362482cc4db4