
The Rise of Artificial Intelligence in Legal Writing

How to Spot AI-Generated Documents and Understand Hallucinated Citations

by Caroleen Brej, Esq., assisted by ChatGPT
Bentley Goodrich Kison
SCBA Board of Directors
As generative AI tools become increasingly integrated into legal research and drafting, attorneys must learn to recognize both their benefits and their limitations and risks. While AI can enhance efficiency and accessibility, it also introduces a new challenge: the possibility of fictitious case law, often referred to as “hallucinated citations.” Understanding how to spot AI-generated legal documents, and why these hallucinations occur, is essential.
What Are AI Hallucinations in Legal Writing?
An AI “hallucination” refers to the generation of factually inaccurate or entirely fabricated content that is presented as if it were real. In the context of legal writing, hallucinations often manifest as case citations to non-existent judicial opinions that appear authentic, complete with plausible party names, courts, docket numbers, and legal propositions.
These hallucinated cases can be difficult to spot at first glance. They may resemble familiar citation formats (e.g., “Smith v. Jones, 542 F.3d 123, 126 (11th Cir. 2009)”), and their language often mirrors legitimate judicial reasoning. However, upon closer inspection, such cases may not exist in Westlaw, LexisNexis, PACER, or any official court database.
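One practical check is to run a suspicious citation through a free public source such as CourtListener. The short Python sketch below illustrates the idea; the endpoint, parameter names, and response fields are assumptions based on CourtListener’s public REST API, so verify them against the current documentation before relying on it.

# Minimal sketch: does a citation string match any opinion on
# CourtListener? Endpoint and parameters are assumptions based on the
# public v3 REST API -- confirm against the current documentation.
import requests

def citation_found(citation: str) -> bool:
    """Return True if the search API reports at least one matching opinion."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v3/search/",
        params={"q": f'"{citation}"', "type": "o"},  # "o" = case-law opinions
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Example usage with the article's hypothetical citation format.
print(citation_found("Smith v. Jones, 542 F.3d 123"))

A lookup like this is a screen, not a substitute for verification: any citation you intend to file should still be pulled and read in Westlaw, LexisNexis, or PACER.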
Why AI Hallucinates Legal Authority
AI models, such as ChatGPT, generate text based on patterns in the data they were trained on. They do not search real-time legal databases or verify citations. When prompted to provide support for a legal proposition, the AI “predicts” what a case citation should look like based on statistical patterns, not factual legal research. This phenomenon arises from:
• No Real-Time Legal Database Access: AI models cannot query live legal databases such as Westlaw or PACER. Their knowledge is static and limited to the data they were trained on, which may be outdated or incomplete.
• No Verification Mechanism: AI models don’t cross-check their outputs against real-world sources. If a citation looks plausible, the model may generate it regardless of its accuracy.
• Prompt Engineering Matters: If a user asks the model to “include three supporting cases,” it will try to fulfill that request, whether real cases exist or not.
AI tools are not inherently deceptive. They reflect the limits of their programming. When instructed to “cite supporting case law,” the AI obliges, even if it must invent a citation to do so.
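To make the point concrete, here is a toy Python sketch. It is purely illustrative, not how any production model actually works: it assembles a citation from made-up pattern frequencies, consulting no database at all.

import random

# Toy "pattern frequencies" standing in for what a language model learns.
reporters = {"F.3d": 0.5, "So. 2d": 0.3, "U.S.": 0.2}
parties = ["Smith v. Jones", "Johnson v. State", "Doe v. Roe"]

def invent_citation() -> str:
    """Assemble a plausible-LOOKING citation from patterns alone --
    nothing here checks whether the result refers to a real case."""
    reporter = random.choices(list(reporters), weights=list(reporters.values()))[0]
    volume, page = random.randint(1, 999), random.randint(1, 1500)
    return f"{random.choice(parties)}, {volume} {reporter} {page}"

print(invent_citation())  # e.g., "Doe v. Roe, 812 F.3d 447" -- looks authentic

The output has the right shape, which is exactly why hallucinated citations pass a casual read: plausibility and accuracy are produced by entirely different processes.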
How to Spot an AI-Generated Legal Document
1. Unusual Language or Style Inconsistencies
AI-generated legal writing often mimics formality but lacks nuance. It may use overgeneralized phrases, repeat synonyms unnecessarily, or apply archaic or inconsistent terminology. For example, frequent repetition of phrases like “It is well-settled law that …” without appropriate case context may signal AI authorship.
2. Overuse of Passive Voice or Generic Framing
While legal writing traditionally employs passive voice, AI tends to overuse it or include vague, formulaic constructions. If a document reads as though it were “assembled” rather than authored with purpose, it could be AI-generated.