Fair Work warns litigants against using ChatGPT
The workplace tribunal has dismissed a worker’s claim that relied on ChatGPT for legal advice as being ‘hopeless and unnecessarily wasting the resources of the commission’.
The Fair Work Commission has warned of the dangers of litigants relying on artificial intelligence for legal advice, dismissing a worker’s claim that relied on ChatGPT for advice as “hopeless” and a waste of the workplace tribunal’s resources.
Sydney man Branden Deysel sought an extension of time to file an unfair dismissal claim against his former employer, Electra Lift Co. Claims are supposed to be filed within 21 days of a dismissal but Deysel resigned from the company in October 2022, and filed his claim 919 days later.
Mr Deysel confirmed to the commission that he had used ChatGPT, which advised him his former employer had contravened workplace laws and he should file the claim.
Lawyers said using ChatGPT would be tempting for commission applicants, many of whom were self-represented litigants.
Michael Byrnes, partner at law firm Swaab, said the commission decision illustrated the risks and dangers of using ChatGPT, or other AI applications or models, for the preparation of commission applications and responses.
“While it made little difference in this case, it is only a matter of time before an otherwise meritorious application or response is undermined by the misuse of AI apps or models,” Mr Byrnes said.
In his decision this month, commission deputy president Tony Slevin said it was clear Mr Deysel had used ChatGPT, given the deficiencies in his application, which failed to address the matters required to make good his claims that the Fair Work Act had been contravened.
Mr Deysel’s application included an extract of advice from ChatGPT stating that his former employer had contravened various employment and other statutory obligations and that he should pursue legal action.
“I can see no basis for this advice,” Mr Slevin said, before dismissing the claim. “ChatGPT also advised Mr Deysel to consult a legal professional or union representative to determine the appropriate course of action. He did not do so.
“(He) simply followed the suggestion made by ChatGPT and commenced the proceedings. The circumstances highlight the obvious danger of relying on artificial intelligence for legal advice.
“The result has been Mr Deysel commencing proceedings that are best described as hopeless and unnecessarily wasting the resources of the commission and the respondent in doing so.”
Mr Byrnes said the risks and dangers of using ChatGPT to file applications were particularly acute in a jurisdiction like the commission, where principles of fairness, which require value judgments, needed to be carefully considered.
“The work of the FWC does not lend itself to a formulaic approach, slavish to previous cases that may ostensibly have similar facts. It is a trite proposition that each case turns on its own circumstances,” he said.
Mr Byrnes said courts and tribunals frequently dealing with self-represented litigants needed to be acutely aware of the possibility ChatGPT was being used. “If it is, and it leads to a sound case being poorly argued or presented, that is the fault of the party seeking to rely on the technology.
“The FWC should give no concession or latitude to any party for having made that decision.
“AI may have a role to play in FWC proceedings but, at this stage in its development, it needs to be used judiciously and its output treated with a healthy degree of scepticism.”