NewsBite

Exclusive

Veteran lawyer slapped down over bungled use of artificial intelligence in immigration case

A veteran lawyer is under investigation after he used artificial intelligence to do his legal research for a court hearing – only for it to be revealed that the cases “do not exist”.

In a case that shows artificial intelligence is not quite ready to take over the world, the lawyer used ChatGPT to research past legal precedents to support his arguments in an immigration case before the Federal Circuit Court.

The artificial intelligence engine returned what purported to be 17 relevant past cases, which the lawyer then submitted to the judge and opposing lawyers without double-checking that they were correct.

However, when the matter next appeared in court, the opposing lawyers told the court that the cases referenced by the lawyer in his submissions “do not exist”.

Making matters worse, the court was told the judge and opposing lawyers had already “spent a considerable amount of time attempting to locate the cases …”

A veteran lawyer working on a Federal Circuit Court case submitted non-existent case citations after using generative AI to do his research for him. Picture: Kirill KUDRYAVTSEV / AFP

The lawyer, whose identity has been suppressed, was referred to the Office of the Legal Services Commissioner for investigation last month.

The legal watchdog will now examine the court’s finding that he misled the court and the opposing lawyers, and that he delayed his client’s hearing when the court was forced to hold a separate hearing over the inaccurate legal research.

When the error was discovered late last year, Judge Rania Skaros ordered the lawyer to appear in court to give a full explanation and any reason why he should not be referred for investigation.

The court was told that the lawyer’s “conduct created unnecessary additional work for the court” and the opposing lawyer spent a considerable amount of time searching for the non-existent cases.

Fronting the judge, the lawyer “expressed his sincere apology to the Court for his conduct, stating he understands that as a solicitor he had a duty to deliver legal services competently, diligently and promptly.”

He also explained that he had decided to use ChatGPT because of health issues that prevented him from sitting for extended periods and affected his concentration.

The lawyer in question cannot be named because his identity has been suppressed.

“He accessed the site known as ChatGPT, inserted some words and the site prepared a summary of cases for him,” Judge Skaros told the court.

“He said the summary read well, so he incorporated the authorities and references into his submissions without checking the details.”

Judge Skaros told the lawyer that his conduct “falls short of the standard of competence and diligence that (his client) in the substantive proceedings was entitled to expect from his legal representative.”

“The Court expressed its concern about the (lawyer’s) conduct and his failure to check the accuracy of what had been filed with the Court,” Judge Skaros told the court. “ … A considerable amount of time had been spent by the court and my associates checking the citations and attempting to find the purported authorities.”

OpenAI CEO Sam Altman. The ChatGPT engine appears to have ‘hallucinated’ the cases referenced by the lawyer. Picture: Jason Redmond / AFP

The lawyer told the judge he would double-check his work in the future.

He also told the court he “has undertaken to further his knowledge and understanding of the risks of using generative AI tools.”

The lawyer told the court his conduct predated a practice note issued by the NSW Supreme Court on the use of AI in cases.

The ten-page practice note included a warning to lawyers that the use of artificial intelligence carried risks, including “inaccurate output”.

Judge Skaros told the court she referred the lawyer for investigation because “the misuse of generative AI is likely to be of increasing concern and that there is a public interest in the OLSC being made aware of such conduct as it arises”.

Do you have a story for The Daily Telegraph? Message 0481 056 618 or email tips@dailytelegraph.com.au


Original URL: https://www.dailytelegraph.com.au/truecrimeaustralia/police-courts-nsw/veteran-lawyer-slapped-down-over-bungled-use-of-artificial-intelligence-in-immigration-case/news-story/d38a698e1f072e9f9397c043c0b19106