HCBA Lawyer Magazine No. 35, Issue 4 | Page 56

Leveraging LLMs: Balancing Innovation with Caution
Technology Section Chairs: Jason Pill – Phelps Dunbar & Kurt Sanger – Buchanan Ingersoll & Rooney
Balance innovation with caution to enhance legal practice and stay ahead in the evolving legal landscape.
In the ever-evolving realm of legal technology, large language models (LLMs) have emerged as both invaluable allies and a boogeyman setting a trap for the unwary. These advanced tools promise to streamline drafting, enhance client communications, and boost overall efficiency for legal practitioners. Yet, when misused, they can conjure up non-existent cases or produce perplexing pleadings. To truly harness the power of LLMs, lawyers must wield them with a blend of caution and expertise, ensuring these tools enhance rather than replace their legal acumen.

Some firms are now embracing internal LLMs and customizing the user experience. At Phelps Dunbar, for example, we have "PhelpsGPT," which is fine-tuned to address the specific nuances and concerns of the legal industry, offering risk scores to flag potential issues. For instance, while it advises caution when drafting substantive pleadings, it more comfortably green-lights a request like "prepare a draft article about the use of LLMs in the law." This tailored approach ensures that AI serves as a helpful ally, while allowing firms to build in certain guardrails for their users.
Whether or not you have access to a specialized model, LLMs can be a helpful ally if you understand their capabilities and limitations.
They excel at generating humanlike text, making them perfect for drafting, summarizing, and brainstorming legal research or motions. However, they can also "hallucinate" facts and misstate legal principles. Independent verification is an absolute must: under no circumstances should LLM outputs be passed along without thorough review and confirmation. One great approach is to provide your prompt and ask the model about its assumptions, directing it to ask clarifying questions. This can help ensure you're on the same page before moving forward with potentially inaccurate results.
We all know that data security and confidentiality are nonnegotiable in the legal field. While public LLMs are great for generating high-level ideas, you should never share confidential information and risk disclosure. Instead, use placeholders and fill in identifying information offline. And ensure your platforms comply with industry standards for data protection. When in doubt, leave it out!
LLMs can also help refine your writing. They can provide templates, language tweaks, and tone adjustments. Instead of asking "draft an email," consider "prepare a persuasive email to a savvy client from a senior associate, with a casual but professional tone." They can even help you spot bad writing habits, like overuse of the passive voice or run-on sentences (a personal nemesis of mine). Consider asking the model to make a sample more persuasive, friendly, aggressive, or informational.
Ethical considerations are also central when using AI in legal practice. Adhere to law firm policies and maintain transparency with clients about AI usage (if you don't have a policy yet, you should!). Ensure that AI-generated advice supplements, not substitutes for, professional legal judgment. Be mindful of potential biases in AI systems, and work to mitigate their impact on legal outcomes.
In conclusion, the rise of LLMs in legal practice is unavoidable — and exciting! Those who embrace and master these tools will gain a competitive edge, while those who ignore them risk being left behind. Successfully integrating AI requires balancing curiosity with caution, ensuring adherence to the principles of accuracy, confidentiality, and client-centered service.
Author: Caroline Catchpole Spradlin – Phelps Dunbar LLP
54 | Mar-Apr 2025 | HCBA Lawyer