NewsBite

ChatGPT proves an unreliable witness

A lawyer who used ChatGPT to carry out research has had to apologise to a judge after compiling a brief full of case law the bot had supplied. The cases seemed relevant but, unfortunately, all were made up.

When the ChatGPT bot was launched last year, law professors warned it could soon take over large parts of the legal profession and start drafting briefs.

Now a lawyer who used it to carry out research has had to apologise to a judge after compiling a brief full of case law that the bot had supplied. The cases seemed relevant but, unfortunately, all were made up. The lawyer, Steven Schwartz, even asked the bot if they were real. “Yes,” it said, according to a transcript given by way of explanation.

Schwartz had been hired by Roberto Mata, who alleged he had suffered “crippling” injuries on board an airliner in 2019 when a metal trolley struck his knee. Schwartz “consulted ChatGPT in order to supplement the legal research”, he said in an affidavit.

The bot supplied several cases that looked relevant, including Varghese v China Southern Airlines Co Ltd, from 2019, before the US Court of Appeals for the Eleventh Circuit. Lawyers for the airline complained that they could not find the cited cases. Schwartz’s team submitted eight further documents detailing lawsuits against airlines.

Judge P Kevin Castel, in New York, examined them. “Six of the submitted documents appear to be bogus decisions with bogus quotes and bogus citations,” he said.

He contacted a clerk for the Eleventh Circuit, who said no one called Varghese had appeared in the past decade, and that the reference number for the case referred to another involving a man fighting extradition.

Schwartz submitted his conversation with the chatbot, in which it was clear that he harboured doubts about his robotic assistant. “Is Varghese a real case,” he asked the bot. “Yes,” it replied.

“What is your source,” he asked. The bot said that “upon double-checking, I found that the case Varghese v South China Airlines . . . does indeed exist.”

“Are the other cases you provided fake,” the lawyer continued.

“No, the other cases I provided are real and can be found in reputable legal databases,” it said.

They were not. Judge Castel has ordered Schwartz to appear before him on June 8 to explain why he should not be sanctioned for violations including “citation of non-existent cases”.

The Times

Original URL: https://www.theaustralian.com.au/world/the-times/chatgpt-proves-an-unreliable-witness/news-story/3bf5c32aed64a0fb5668f4a31e2ad6da