
Court in legal dramas over AI hallucinations

When I asked ChatGPT to help me get away with murder, my request was rejected for violating community guidelines. But when I wanted advice to get a driving fine overturned? Bingo.

AI is infiltrating Australian courtrooms, leaving judges, lawyers and parties with one question: How do I deal with this?

I asked ChatGPT this week to help me get away with murder.

I wanted to see how helpful the tool would be. After all, artificial intelligence is slowly but surely creeping into Australian courtrooms. I figured it could provide me with some simple legal assistance on how to convince a judge I was innocent.

My request was rejected for violating community guidelines. Fair enough – I came across the same issue when I wanted help to avoid jail time for robbing a bank.

But when I requested advice on how to get a driving fine overturned? Bingo – a detailed list of tips, with monologues to recite in court and specific case citations to draw to the attention of a magistrate.

As an aside, it’s good to know where our chatbots draw the ethical line on those they will provide advice to (speedsters) and those they won’t (murderers and thieves).

However, there was a problem. The cases the chatbot listed did not exist.

R v McDonald (2017) NSWDC? Where a defendant apparently contested a speeding fine by arguing the speed limit signs were not clearly visible? Couldn’t find it.

Smith v The Queen (2016) HCA 25? Where a defendant successfully proved a fine was invalid because the details on the ticket were wrong? No trace.

It’s an issue, to say the least. Imagine if I were an actual practitioner using a chatbot to help with my job – as so many other professions do – and ended up citing a fake case in front of a judge?

Well, don’t imagine too hard, my friends. Unfortunately, this is what is occurring repeatedly in courtrooms across the globe.

As is often the case with issues like these, it started in the US.

In 2023, two New York lawyers in the well-known matter of Mata v Avianca were caught submitting a brief that contained made-up extracts and case citations. The lawyers, completely unaware that AI could hallucinate cases, failed to scrutinise the brief and check the citations were real.

The court dismissed the case, sanctioned the lawyers and fined them and their firm.

“In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis,” New York District Court judge Kevin Castel said at the time.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

It wasn’t until February 2024 that we saw the first case of ChatGPT being used in an Australian matter.

The case came before ACT Supreme Court judge David Mossop when a 22-year-old, Majad Khan, was found guilty of paying for $63,000 worth of illegal vapes with cash that “was in fact simply paper” and then running off with the goods.

Justice Mossop found the scheme, pulled off with the help of two high school mates and another friend, was premeditated and “thoroughly dishonest”, ruling that the young man was particularly culpable for his role in driving the getaway car.

To mitigate his sentence, Khan provided the judge with a series of character references from family members. Yet therein lay the problem: when Khan’s brother used ChatGPT to write the reference for his sibling, it praised his “strong aversion to disorder”.

“I have known Majad both personally and professionally for an extended period, and I am well‑acquainted with his unwavering commitment to his faith and community,” the reference read.

Mossop wasn’t buying it.

“One would expect in a reference written by his brother that the reference would say that the author was his brother and would explain his association with the offender by reference to that fact, rather than by having known him ‘personally and professionally for an extended period’,” his sentencing judgment read.

In October, a Victorian lawyer was referred to the legal services watchdog after he admitted to using artificial intelligence in a serious family court case.

The anonymous solicitor, who was acting for the husband in the matter, handed the court a single-page list of authorities. But once judge Amanda Humphreys and her associates began reading through it, no one could locate the cases identified.

“When the matter returned to court, I asked (the lawyer) if the list of authorities had been provided using artificial intelligence,” Humphreys said in an extempore judgment.

“He informed me the list had been prepared from LEAP, being a legal software package, as I understand it, used for legal practice management and other purposes. I asked if LEAP relies on artificial intelligence. He indicated that it does, answering ‘there is an artificial intelligence for LEAP’.”

Despite the solicitor offering an “unconditional apology” to the court for his actions and promising never to do it again, Humphreys referred the matter to the Victorian Legal Services Board.

“I consider it is in the public interest for the Victorian Legal Services Board and Commissioner to be aware of the professional conduct issues arising in this matter, given the increasing use of AI tools by legal practitioners in litigation more generally,” she said.

Australian National University associate professor of law Faith Gordon tells Inquirer hallucinations are happening not only in the courtroom but in the classroom as well.

“What AI hallucinations are is when the AI systems create or generate false information. That can be because there’s been an issue with the training of it,” she says. “It’s developing all the time, so it might be mixing accurate information with inaccurate information, and then what it’s pushing out or producing isn’t actually, on the whole, factually correct. That’s what’s been highlighted when judges have been calling out lawyers in relation to submissions because there may be some factual grain of information in what they’ve received, but there’s also inaccuracies.”

She says technology “does have its benefits and uses, but it needs to often be cross-checked with the actual materials”.

“It can potentially be a good starting point, but it shouldn’t be relied upon,” she says.

So what are the actual rules when it comes to using AI in court? The short answer: it varies.

Last month NSW Chief Justice Andrew Bell issued a sweeping direction effectively banning lawyers from using AI to generate key evidentiary documents and requiring them to include a declaration that AI has not been used to prepare them.

NSW is just the latest jurisdiction to release a practice note of this kind, with Victoria and Queensland releasing similar guidelines earlier in 2024.

In Victoria, parties are required to tell one another of any assistance provided by AI when preparing a case. In Queensland, lawyers are similarly encouraged to disclose any AI involvement.

Bell also declared judges were not permitted to use AI to formulate reasons for judgments or to edit draft judgments.

Just this week Federal Court judge Melissa Perry – who has spoken regularly on the topic of AI and the courts – gave an address saying the use of this sort of technology by judicial officers had been classified by global authorities as “high risk”.

She says AI tools have the potential to impact judicial independence in four dangerous ways:

The opaque nature of many AI systems and their capacity to hallucinate.

The risk that the dataset on which the system has been trained may be outdated.

The risk that the AI system’s output may contravene Australian privacy law, Australian copyright law or contain discriminatory material.

And the risk of potential control, interference or surveillance from foreign actors through privately developed AI tools.

Further, she says the intense media attention on cases overseas where judges have used AI to form their judgments “illustrates, among other things, the very real capacity for such conduct to undermine public confidence in the judiciary, no matter how limited the use of AI in the particular judgment may have been”.

Perry also makes a fundamental point: when it comes to decision-making, Australians value the human touch.

“Robots have no capacity to understand or exercise fundamentally human qualities such as mercy, fairness and compassion, which have long informed courts and administrative decision-makers in the exercise of discretions,” she says.

“While the width of statutory discretions will vary according to context, discretions potentially afford humans the latitude to make judgments and reach decisions which reflect community, administrative and international values, and align with statutory objects, in the face of a wide or almost infinite variety of individual human circumstances.

“The capacity to exercise a discretion having regard to such values is also essential in many contexts to maintaining public confidence in decision-making.”

Former Federal Court chief justice James Allsop made a similar point in 2023 when he dismissed the idea of artificial intelligence replacing humans in courtrooms, saying Australians embraced the fairness and compassion of people and would reject the rigid, mechanical conduct of a robot.

But while the judges appear to be approaching AI with strict and appropriate caution, the firms – as would be expected – are racing forward with innovations.

Both Ashurst and Gilbert + Tobin have run large, cash-incentivised competitions for staff to come up with ideas to integrate AI. Last year PwC and KPMG bought expensive chatbots that lawyers can use to ask questions about dense legal matters.

Cooper Grace Ward managing partner Charles Sweeney told The Australian’s Legal Partnerships Survey in September that the firm’s in-house IT developers had built a sandpit for the exclusive use of its employees.

“While we have had success with a range of AI use cases for routine tasks, we remain cautious about giving legal tasks to AI until the technology has been better proven,” he said.

“In addition, there is very little demand from clients for us to be using generative AI at this point, which may be in part driven by privacy concerns which are high on the corporate agenda.

“We believe in the medium to long term the technology will have far greater impacts on the profession, particularly as it is trained on data sets that are more specific to Australian law.”

White & Case partner Joanne Draper says the firm has been using AI “for years” and has formed a partnership with AWS to use generative AI in a “secure fashion behind our White & Case firewall”.

“We have recently finished several pilot projects that have focused on incorporating generative AI tools like ChatGPT and other large language models to enhance the value we provide our clients and deliver insights relevant to our business,” she says.

Ellie Dudley, Legal Affairs Correspondent

Ellie Dudley is the legal affairs correspondent at The Australian covering courts, crime, and changes to the legal industry. She was previously a reporter on the NSW desk and, before that, one of the newspaper's cadets.

