Want to Be Sanctioned by the Court? Rely on Generative AI to Create Your Legal Arguments

By Amber E. Storr, Esq.

If you believe that generative AI can take the place of a knowledgeable lawyer, think twice. And then perhaps think again a third and fourth time.

Recently, a litigant in Missouri representing himself in an appeal of a judgment against him in an employment law case not only lost his appeal but was sanctioned $10,000 for submitting an AI-generated brief containing fake cases to support his legally flawed arguments. The appellate court’s decision noted numerous errors and omissions in the brief filed by Jonathan Karlen that required dismissal of his case. But because the brief’s case citations were overwhelmingly inaccurate or entirely fictitious, the opposing party had to expend otherwise unnecessary resources trying to track down and research these phony cases, resulting in monetary sanctions. Only 2 of the 24 cases cited in the brief were genuine.

This is not a unique occurrence. We all need to understand the very real risks of using generative AI to create legal arguments and briefs. Generative AI programs are designed to produce whatever the user asks for, which means the program will make things up as necessary. When an AI program creates an argument that is factually or legally incorrect, including when it presents fabricated citations to legal authority as genuine, the phenomenon is called a “hallucination.” Generative AI programs have hallucinated nonexistent case citations and bogus quotations to support arguments contained in legal briefs. These programs have even been known to draft completely fictitious written decisions to back up those fake citations.

Such AI-hallucinated creations can, at first, look very real on the surface, even to a lawyer’s eye. Some of these fake decisions are attributed to real courts and judges. The fake cases will sometimes cite additional fake legal authority, and sometimes cite real cases and statutes, albeit often misrepresenting the real authority as supporting a legal argument that the source does not support. While some AI-generated briefs have raised immediate suspicions among knowledgeable lawyers and judges because the arguments and fake citations contradict well-established law, it is often not until the made-up citations and decisions are searched for in reputable legal databases and digests that the truth becomes clear.

Representing oneself in court instead of hiring a knowledgeable lawyer has always been fraught with peril. But relying on generative AI to create your legal documents and arguments turns that peril into a veritable minefield of risks and costs. This is true even for lawyers who rely solely on generative AI to do their work for them. At least three lawyers have faced sanctions related to their misplaced reliance on generative AI to create a legal argument.

Generative AI has beneficial uses in many industries, including law, but it is merely a tool, not a lawyer. Like any tool, it must be used responsibly, with care and control. Open generative AI programs are simply not ready for litigation prime time. While AI is useful in certain limited ways, responsible law firms have established AI policies to protect client confidentiality, check output for accuracy under applicable law, and ensure that the quality of their work product is not compromised by its use.

Hurwitz Fine P.C. established such policies early on. We continue to monitor the development of generative AI products to ensure that any use is appropriate, advisable, ethical, and to our clients’ benefit, not their detriment.

The takeaway lesson here: AI cannot replace a reputable law firm and lawyers with knowledge of the areas of law applicable to your needs.