The issues begin to arise when lawyers and paralegals attempt to remove themselves from the process completely. AI is a tool to aid human beings and give them new and faster ways of thinking, but some lawyers have decided to stop thinking altogether and allow AI to do all the work for them.
In the past 12 months, there has been no shortage of stories of lawyers getting busted using AI to write their briefs or motions, and it isn't difficult for judges to find out who is doing this. Even the lawyers of high-profile clients like MyPillow CEO Mike Lindell have been sanctioned after a judge caught them using AI to write their motions, and that is just one entry on a seemingly endless list of lawyers sanctioned for being too lazy to double-check the information their AI of choice is spitting out.
One of the most pressing dangers is the risk of inaccurate or misleading outputs. AI tools, especially those powered by large language models, can produce responses that sound authoritative but are factually incorrect, a phenomenon known as "hallucination." If a lawyer relies on such outputs without independent verification, it could result in flawed legal advice, missed precedent, or even sanctions from the court for submitting erroneous information. We are witnessing this outside of the practice of law, as well. For example, the "Make America Healthy Again" report from this spring relied heavily on AI, and the result was a jumbled mess that ended up citing health reports that never actually existed to justify the wild and completely false assertions in the final report.
Confidentiality and data security are also major concerns for law firms. Many AI tools, especially cloud-based ones, require uploading sensitive client data to third-party servers. This raises questions about compliance with privacy laws, attorney-client privilege, and data protection standards. If a data breach occurs or client information is improperly accessed or stored, the firm could face serious ethical and legal consequences, along with damage to its reputation.
There is no doubt that AI technology will eventually get to the point where these problems no longer exist. AI will be able to fully act as a lawyer or legal representative for human beings at some point, possibly even within our own lifetimes. But as it stands right now, clients need human beings who are capable of independent thinking, creativity, and compassion; AI has a long way to go before it can master, or even fake, those qualities.
As is often the case, there is more to an issue than just the "Good" and "Bad" sides; there is always a shady gray area as well, the area where things get a little morally dubious while still being perfectly legal.
It is no secret that in our digital age, nothing we do is private. Our shopping habits, our driving routines, and even our medical records are regularly harvested and sold (legally) to companies all across the planet. This is generally viewed as a massive invasion of our privacy that we have all just come to accept as a part of modern life, whether we object to it or not. This is where AI can benefit law firms. Imagine having the ability to identify a client before they even know they need to start looking for lawyers. That is the reality in the near future that AI is going to give law firms, and it will be as simple as a few keystrokes into a prompt.
As I mentioned, our shopping histories, medications, driving habits, everything you can think of, are all available and regularly tracked by our smartphones, watches, and apps. If there is a product recall due to dangers to consumers, AI can quickly access this data, if the law firm has purchased it, to find out which consumers purchased these products and immediately put out a call, targeted email, or perhaps even just include them in the inevitable ad campaign that will result from the recall, all before the client even realizes that they need a lawyer.
This move could revolutionize client acquisition, but again, it does seem morally dubious. While this information is available for legal purchase, many clients may not look too kindly on a firm that has harvested their personal data for professional gain. But that's a gamble that plenty of law firms will be willing to take once technology gets to that point.
The Biggest Threat Of All
The greatest danger of AI— and the root cause of all of the problems with AI that have arisen in the legal profession and elsewhere— is that it is not and cannot become a replacement for human beings.
Critical thinking is a vital skill that many believe is becoming an extinct trait. If we become a species that relies on artificial intelligence to do our thinking for us, we will no longer have any need for that skill, and evolution will take care of the rest. And we're already starting to see this happen. Artists have been the hardest hit so far. Why bother painting a masterpiece when AI can create a unique painting for your dining room in a matter of seconds? Why would a Hollywood studio continue to pay actors millions of dollars for a film when the technology is growing and evolving so rapidly that we soon won't have real movie stars anymore, just hyper-realistic computer images that are indistinguishable from human beings? Why pay screenwriters and songwriters when AI can knock out the job in the blink of an eye? These careers are currently at the epicenter of the debate over whether or not we should embrace AI technology, but they won't be the only ones threatened.
This is why we must regulate ourselves. We must understand that AI is a tool to help us, not an escape from thinking and doing. When used properly, it can help us grow as a species. But if we rely too much on it and destroy our critical thinking skills, those science fiction thrillers will start to look very prophetic.
30 The Trial Lawyer