In an unprecedented ruling, a federal judge has imposed $5,000 fines on two lawyers and their law firm for submitting fabricated legal research in an aviation injury claim, blaming their use of the artificial intelligence tool ChatGPT.
Judge P. Kevin Castel found that the lawyers had acted in bad faith but acknowledged their apologies and subsequent corrective measures, concluding that harsher sanctions were unnecessary to deter future instances of AI tools generating fake legal history in arguments.
While recognizing the common use of technological advancements, the judge emphasized that attorneys have a responsibility to ensure the accuracy of their filings and act as gatekeepers.
In a written statement, Castel stated, “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
This recent incident follows a Texas judge's order earlier this month, requiring attorneys to confirm that they will not utilize ChatGPT or similar generative AI technology to draft legal briefs due to the tool's potential to invent facts.
Judge Castel criticized the lawyers and their firm, Levidow, Levidow & Oberman, P.C., for neglecting their responsibilities by submitting non-existent judicial opinions with fabricated quotes and citations generated by ChatGPT.
Even after the existence of these opinions was called into question, they continued to stand by them.
In response, the law firm stated that it would comply with Castel's order but disagreed with the finding of bad faith.
They apologized to the court and their client, asserting that their use of the AI tool resulted from an unprecedented situation where they mistakenly believed that technology could not fabricate cases.
The law firm is currently considering whether to appeal the ruling.
Judge Castel attributed the bad faith to the attorneys' failure to appropriately respond to the judge and their legal adversaries when it was discovered that six referenced legal cases supporting their written arguments did not actually exist.
He cited shifting explanations and contradictory statements from attorney Steven A. Schwartz, while attorney Peter LoDuca was accused of lying about being on vacation and being dishonest about the accuracy of submitted statements.
During a hearing, Schwartz revealed that he used the AI-powered chatbot to assist him in finding legal precedents for a client's case against Avianca, a Colombian airline, involving an injury sustained during a 2019 flight.
The chatbot, which generates essay-like responses, suggested several aviation-related cases that Schwartz had been unable to find through traditional methods. However, it was later discovered that some of the suggested cases were fabricated, featuring misidentified judges or nonexistent airlines.
The fraudulent decisions included the fictitious cases Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines.
The judge noted that while parts of the fabricated decisions superficially resembled actual judicial opinions, other parts were nonsensical gibberish.
In a separate written opinion, the judge dismissed the underlying aviation claim, citing the expiration of the statute of limitations.
Lawyers representing Schwartz and LoDuca have yet to respond to requests for comment.