SA Employment Tribunal Justice Steven Dolphin refers lawyer for misconduct proceedings for citing three allegedly fake cases
The legal watchdog will investigate whether AI has infected a court case in a first-of-its-kind accusation against an SA lawyer.
An Adelaide lawyer misled a tribunal by citing three cases – none of which actually happened – in their closing submissions to try to win a WorkCover claim, a judge says.
In a first for the state’s legal profession, the unnamed lawyer now faces disciplinary action for their alleged misconduct, which may be tied to the use of generative AI.
In a statement published online, SA Employment Tribunal Justice Steven Dolphin – who caught the lawyer in the alleged act – said such behaviour was intolerable and would be called out.
“The ability of SAET to rely upon the accuracy of submissions made by legal practitioners is fundamental to the administration of justice,” he said.
“Furthermore, SAET must protect its processes from abuse.
“The legal profession should be aware that SAET will not tolerate such conduct, and practitioners can expect similar referrals to the Legal Practitioners Conduct Commissioner in similar circumstances.”
In his statement, Justice Dolphin said lawyers “must not knowingly or recklessly mislead a court or tribunal”.
They must also, he said, “correct any misleading statement made to a court or tribunal”.
“The citing of a fictitious authority for a proposition of law is an anathema to these principles,” he said.
“The reason for the referral was the inclusion in written submissions of three fake cases which, it was argued, were SAET authorities for various issues under the Return to Work Act 2014.
“Such conduct is the first that I have been made aware of where a legal practitioner has misled SAET by citing non-existent authorities.”
Justice Dolphin said the possibility of AI models “hallucinating” and inventing fictitious information was well-known and already considered by courts around the country.
“Whilst part of that has been misrepresenting real authorities or citing real authorities that have no relevance to the case, more worryingly is the creation of fictitious authorities and fake ratio,” he said.
“Put simply, Gen-AI cannot currently be trusted to provide accurate information.
“Moreover, legal practitioners should be aware that reliance on Gen-AI may lead to the degradation of essential and necessary legal skills required to practice.”
Law Society of SA president Marissa Mackie said the “irresponsible use” of a “powerful tool” like AI “can have dire consequences”.
“Practitioners can legitimately use AI to benefit their practice and their clients, but it is crucial that they observe their strict professional and ethical obligations,” she said.
“For lawyers who may use AI to assist in the preparation of court documents, this means that they must carefully review these documents for accuracy.”
She said the Courts Administration Authority had recently surveyed lawyers about AI use, and that the Society was “heavily involved” in consultation to “develop guidelines for the profession”.
In September, a Victorian lawyer was stripped of his right to practise as a principal practitioner after being caught providing false, AI-generated citations to the Family Court.
In July 2024, that lawyer had used legal software to generate a list of citations and presented it to Justice Amanda Humphreys without first checking it for accuracy.
The lawyer offered an “unconditional apology” for their actions and promised to “take the lessons learned to heart”, asking that they not be disciplined.
Justice Humphreys accepted the apology but said the incident was too important not to refer to the Victorian Legal Services Board, which imposed stringent supervision conditions upon the lawyer.
It was the first time an Australian lawyer had been disciplined over AI use.
