Insurance Coverage Issues for Lawyers in the Era of Generative AI

I am one who can admit when I’m wrong. A few years ago, I was convinced the Metaverse was going to be a big deal, one that would eventually have a significant impact on how lawyers interact with their clientele. Oops. Got that one wrong. Well, at least for the time being. One thing I did get right, however, was realizing how significant generative AI would become, how quickly it would be adopted, and how much of its potential remains untapped.

When it comes to generative AI, the future is here, and lawyers in firms of every shape and size are finding ever more creative ways to tap into its potential. Like me, these lawyers see the benefits of these technologies. Of course, as with all things tech-related, lawyers also have an ethical obligation to explore and understand the risks of any technology deployed in a law practice. Perhaps it’s because of what I do for a living, but as I see it, fully understanding the risks of deploying generative AI requires looking into the insurance coverage implications as well. Thus, the following are a few key considerations lawyers should keep in mind when integrating generative AI into their practice.

  • Malpractice Involving AI Output

At the time of this writing, lawyers’ professional liability (LPL) policies typically do not exclude coverage for claims alleging negligence arising from the use of generative AI. That said, coverage may depend on whether the conduct at issue meets the policy’s definition of “professional services.” Don’t assume that it always will. Currently, a well-known risk with generative AI is the hallucination problem. What if an AI tool produces a fabricated, incorrect, or misleading response and a lawyer relies on the accuracy of that output? Yes, a negligence claim might follow, but would it be a covered claim? The answer could be no.


If this lawyer is unable to demonstrate that she exercised reasonable care and due diligence with her use of the AI tool, then an insurer could argue that no professional service was ever provided because the lawyer simply chose to blindly rely on third-party technology. No professional service means no coverage; and unfortunately, the coverage analysis doesn’t stop there. If the subject lawyer did make a deliberate decision to blindly accept the output as accurate, this act might also trigger a policy’s intentional act exclusion.


A risk management takeaway is that lawyers must always accept accountability and responsibility for AI-generated output by validating its accuracy. Understand that a lawyer’s duties of competence and diligence can never be delegated to a machine. It’s as simple as that.

  • AI Interfacing with Clients or the General Public

If a law firm markets AI-generated content or tools (e.g., an online chatbot or a DIY legal form generator) directly to clients or the public and a malpractice claim arises out of that service, would this be a covered claim? Here again, depending upon the specifics of the situation and the jurisdiction in which the alleged negligence occurred, the answer could well be no, for two reasons. First, if firm lawyers allowed the AI to make critical legal judgments without attorney oversight, this could be viewed as the unauthorized practice of law, and LPL policies typically exclude coverage for the unauthorized practice of law. Second, the lack of attorney oversight also implies that no professional services were rendered by an attorney; and as you now know, no professional service means no coverage. In short, over-reliance on an AI tool, or allowing it to make legal decisions without attorney oversight, can create unintended consequences.


A risk management takeaway is to exercise caution when deploying AI tools that interact directly with the public and/or clients, because a lawyer’s duty to supervise and review all work remains paramount. The fact that an AI tool is a non-human member of the “staff” makes not one iota of a difference.

  • Confidentiality and Data Security

Feeding sensitive or confidential client information into a generative AI tool, especially one that is cloud-based, accessible to the public, or not specifically designed for secure use by legal professionals, could result in a data breach or unauthorized access to that information, potentially giving rise to a claim. Again, would this be a covered claim? Under your malpractice policy, quite possibly no.

The reason is that most LPL policies have exclusions related to intentional acts or breaches of confidentiality that are not the result of a negligent act. That said, if your firm has purchased cyber liability insurance, coverage may be available under that policy depending upon the specific circumstances of any breach. Just be aware that here too an intentional acts exclusion could come into play.


A risk management takeaway is to only use generative AI platforms that come with strict data privacy assurances, allow users to opt out of data retention, and comply with your jurisdiction’s data protection regulations. And yes, this does mean you need to read and understand the terms of service before using any generative AI platform.


I hope this information helps you in any generative AI decision-making process going forward, because I do believe that generative AI offers incredible opportunities for our profession, opportunities that will enhance how legal services are delivered. While these three coverage issues aren’t the only concerns with AI, they are the ones I believe every lawyer should be most aware of. As for me, I guess I’m kind of relieved the whole Metaverse thing seems to have lost its steam. When I start to think about all the coverage issues with that one, it makes my head hurt. Just sayin’.

Mark Bassingthwaighte, Esq., serves as Risk Manager at ALPS, a leading provider of insurance and risk management solutions for law firms. Since joining ALPS in 1998, Mark has worked with more than 1200 law firms nationwide, helping attorneys identify vulnerabilities, strengthen firm operations, and reduce professional liability risks. He has presented over 700 continuing legal education (CLE) seminars across the United States and written extensively on the topics of risk management, legal ethics, and cyber security. A trusted voice in the legal community, Mark is a member of the State Bar of Montana and the American Bar Association and holds a J.D. from Drake University Law School. His mission is to help attorneys build safer, more resilient practices in a rapidly evolving legal environment.
