Why All Lawyers (Even Solos) Need to Take Deepfakes Seriously - and What You Can Do About Them

Let’s start by defining the word deepfake. A deepfake is a hyper-realistic image, video, or audio forgery that was edited or generated using artificial intelligence. These synthetic media can convincingly mimic real people saying or doing things they never did; can portray events, people, or things that are not real; and are difficult, if not impossible, for humans to reliably distinguish from the real thing. Making matters worse, deepfake technology is rapidly advancing and widely available to the masses, and with tools like Synthesia, DeepFaceLab, and Resemble AI, bad actors can fabricate convincing content with minimal technical skill.

Why Should Lawyers Care?

Given the above, the implications are profound. The concerns that come immediately to mind include evidence tampering, social engineering scams, impersonation, reputational attacks, and malpractice exposure. For example:

  • Fabricated Evidence – What if an opponent or malicious third party were to produce doctored audio or video that purports to show a witness or a client making a statement or engaging in conduct that never occurred? Even if you can eventually prove it’s a fake, recovering from the short-term reputational, tactical, or judicial damage may prove impossible. Even more concerning: as deepfakes continue to proliferate (which they most surely will), will courts and jurors grow ever more skeptical of genuine video and audio, eroding the evidentiary value of what used to be “trustworthy” evidence?

Fabricated evidence has been a problem for far longer than deepfakes have been around. Deepfakes just make the challenge of identifying fabricated evidence more complex and expensive.

  • Social Engineering Scams – What if a cybercriminal were to create deepfake audio of you or a client in an attempt to commit wire fraud? Do you think the person at your firm targeted by this scam would question the veracity of the instructions? Would you, if a “client” were to call you and authorize a wire transfer? This example isn’t just hypothetical. Millions upon millions of dollars have been stolen worldwide as a result of scams just like this.
  • Impersonation – What if someone were to create a deepfake of your client in an attempt to settle a matter under more favorable terms, communicate with third parties, or negatively impact your attorney-client relationship? What if someone were to create a deepfake of you in an attempt to turn your client against you, communicate with third parties, or communicate with opposing counsel? If you think something like this could never happen, think again, because it already has.
  • Reputational Attacks – What if an opposing party in a contested divorce were to create a deepfake of you making racist remarks, touching someone inappropriately, or threatening someone, and the video goes viral? A reputation that took years to build could be gone in an instant. You and I both know that attacks on reputations have been going on for years. Deepfakes just make the chances of this type of attack succeeding a heck of a lot better.
  • Malpractice Exposure – What if you fail to recognize or challenge evidence that isn’t authentic? What if you rely on synthetic media without proper verification and it turns out to be a deepfake? What if deepfakes are used in a disinformation or defamation campaign against a client and you fail to properly advise the client on how to respond? Missteps like these can all too easily lead to disciplinary complaints and malpractice claims.

What Can and Should You Do Now?

I do understand how tempting it is to hope that deepfakes will prove to be something you will never have to deal with in your practice. All I can say is that when it comes to deepfakes, running your practice on a wing and a prayer isn’t going to get you very far in terms of responsibly managing this risk. You must be proactive. Here are a few ideas on where to start:

  • Education and Training – Start with the basics. Make sure everyone at your firm knows what a deepfake is and how easy they are to create. Train them to spot common red flags such as lip-sync errors, unnatural blinking, mismatched reflections, unnatural pauses, and inconsistent shadows, to name just a few. Start treating digital media with skepticism. Practice asking “Is this authentic?”, particularly with evidence that looks too good, or too damaging, to be true. And always consider requiring proof of authenticity before relying on it.
  • Conduct Mandatory Ongoing Social Engineering Awareness Training – Over time this training should cover all the various tactics utilized in social engineering attacks. Include current deepfake examples in order to demonstrate how these attacks “look and feel.” Note that mandatory means no exceptions; all lawyers and staff must participate. And if you happen to be a true solo with no staff, you should at least seek out and review relevant educational materials (e.g., you could subscribe to the KnowBe4.com Blog).
  • Mandate Out-of-Band Verification of Funds Transfers – Require the use of an out-of-band communication process to verify the legitimacy of every request to transfer funds, regardless of who makes the request or which communication channel the requestor uses. To clarify, out-of-band verification means challenging and confirming a request to transfer, pay, or deliver funds using a communication method that is separate and distinct from the one the requestor originally used. For example, if the instructions come in the form of a video call, you might verify them by seeking confirmation via a text message or a phone call to a previously verified number.
  • Maintain Strong Chain of Custody and Metadata Preservation – When you receive any digital media from clients or third parties, treat it as evidence from the get-go. Require and preserve the original files and metadata, use secure platforms for evidence exchange, and document chain of custody meticulously, because courts will look to provenance and a reliable chain of custody to assess authenticity.
  • Partner With Experts – While there are products and services that apply forensic analysis, metadata scrutiny, AI-based flagging, and anomaly detection to identify manipulated media, they are not foolproof. If you decide to use such tools, treat the results as suggestive, not conclusive. Given this, it’s important that you build relationships with credible digital forensics consultants who have experience in deepfake detection and litigation support, and use them as your budget allows.
  • Include a Digital Evidence Integrity and Deepfake Risk Provision in Your Engagement Agreements – Clients may not realize how costly or complex it can be to prove what is real and what is not. Given that the authenticity of digital evidence is increasingly under threat, a provision such as this can help protect clients from surprise costs, prepare them for possible attacks on their credibility, and help ensure that their own evidence can withstand scrutiny. I had Microsoft’s Copilot draft the following sample provision:

Client Acknowledgment of Digital Manipulation Risks:

Client understands and acknowledges that advances in artificial intelligence and digital editing technologies—including but not limited to “deepfake” audio, video, and image generation—pose a growing risk to the authenticity and reliability of electronically stored information (ESI) and multimedia evidence. These technologies may be used to fabricate or alter content in ways that are difficult to detect without expert analysis.

Preservation and Authentication of Client Evidence:

To safeguard against potential challenges to the integrity of Client’s own evidence, Client agrees to cooperate in preserving original files, metadata, and chain-of-custody documentation for any digital materials relevant to the matter. Upon request, the Firm may recommend or engage forensic professionals to assist in authenticating Client-provided evidence. The cost of such services shall be borne by the Client unless otherwise agreed in writing.

Responding to Potentially Manipulated Evidence from Opposing Parties:

If the Firm reasonably suspects that evidence submitted by an opposing party has been digitally manipulated or generated using deepfake technologies, the Firm may advise Client on the feasibility and cost of challenging such evidence. This may include retaining forensic experts, conducting authenticity analyses, and filing appropriate motions. Client understands that these efforts may involve significant time and expense, which are not included in standard engagement fees.

Limitation of Firm Responsibility:

While the Firm will exercise reasonable diligence in evaluating the authenticity of evidence, it cannot guarantee the detection of all forms of manipulation or fabrication. The Firm’s role does not include forensic analysis unless expressly agreed upon in a separate writing.

A Final Thought

Deepfakes have the potential to undermine one of the core foundations of law: the ability to present trustworthy evidence that holds people accountable. As much as I wish otherwise, deepfakes are not going to be a passing novelty. They are best viewed as an emerging and very real threat. That said, there is some good news. With prudent education and preparation, responsive procedures, and the right partners, you can competently manage the risks deepfakes bring to your practice. Just realize that the time to start is now.

Mark Bassingthwaighte, Esq., serves as Risk Manager at ALPS, a leading provider of insurance and risk management solutions for law firms. Since joining ALPS in 1998, Mark has worked with more than 1200 law firms nationwide, helping attorneys identify vulnerabilities, strengthen firm operations, and reduce professional liability risks. He has presented over 700 continuing legal education (CLE) seminars across the United States and written extensively on the topics of risk management, legal ethics, and cyber security. A trusted voice in the legal community, Mark is a member of the State Bar of Montana and the American Bar Association and holds a J.D. from Drake University Law School. His mission is to help attorneys build safer, more resilient practices in a rapidly evolving legal environment.
