But will AI end up being more helpful or a hindrance? For most law firms the answer is still unknown.
“Many law firm Chief Innovation Officers have been very, very busy trying to understand and interpret how AI is going to impact their firms and to what extent it can be integrated into the practice of law,” says Joseph McSpadden, Senior Vice President–Americas at Williams Lea. “From the feedback I’ve picked up, the way AI is used is probably going to take some time to evolve.”
SAFE BEGINNINGS
The challenge, of course, is to enjoy the productivity enhancements available from AI without allowing the new technology’s missteps to endanger the firm. Achieving that delicate balance means carefully selecting the tasks entrusted to automated systems.
“It’s recommended that legal practitioners only use generative AI for specific, isolated, repetitive time-consuming tasks that are low in risk,” says Corey Garver, head of IT services at Meritas, the global network of law firms.
He suggests the following:
Marketing and communications: Think of how automating first drafts of things like websites, biographies and social media posts would save time. “This kind of marketing content is necessary to get thought leadership out there. But it can be time-consuming, especially for lawyers, so that makes ChatGPT very attractive and very helpful from an efficiency standpoint.”
Legal document creation and automation: Generative AI can be a great starting point for habitual, time-consuming tasks, such as nondisclosure and confidentiality agreements and basic contracts, notes Garver. “We are seeing a lot of tools out there for contract review that leverage generative AI to speed up the tedious processes of redlining, data analysis, and manually reviewing and comparing contracts.”
Just remember — they can create drafts only. It’s still on staff to check the documents for accuracy.
Discovery: Garver says that many litigation practices have been experimenting with generative AI, noting that the automation is getting more sophisticated now. “While many firms are hesitant to use it in this area, the fact is that ChatGPT can be very helpful as a starting point for brainstorming ideas, and for drafting opening and closing statements. It can also be used for scenario planning.”
AI can also be used to help understand what might be going through the minds of a particular set of jurors. “Different parts of the country and the world have different cultures and norms,” says Garver. “As a result, sometimes it's hard for any human to be 100% empathetic with jurors. So as a starting point for brainstorming, AI can provide insights on what jurors from a certain part of the country or region, or in this sort of economic environment, might think about a particular case.”
DANGER ZONES
The risk of costly errors arises when law firms try to rely on AI for higher-level work. ChatGPT, for example, has proven to be way too risky for unsupervised legal research.
“A lot still needs to be uncovered regarding the dangers of AI,” says McSpadden. “You hear a lot about hallucinations, where AI fabricates information or creates false sources. Those are obvious dangers that need to be better understood. Law firms need to be able to fence those off so they won’t be a significant issue or create significant exposure to a law firm.”
With that in mind, Garver advises firms to stay alert in areas such as copyright law. “When using generative AI, it’s critical to review the tool’s output, not just for accuracy, but also to ensure that there is no copyright infringement,” says Garver. “That’s a huge concern.”
He says to be mindful of the Americans with Disabilities Act (ADA) as well. “Lawyers have a duty to avoid bias and to refrain from any conduct that discriminates on the basis of a number of different criteria,” says Garver. “As a result, lawyers must consider whether any artificial intelligence technologies they use are discriminatory or biased.”
Additionally, AI might add another layer of procedure. Garver says that in the United States, some judges are requiring parties to disclose whether AI is being used and how.
The bottom line is that human follow-up is critical when working with AI output. “Any information provided by ChatGPT must be cross-referenced with reliable sources,” says Garver.
DRAFTING SENSIBLE POLICIES
Aware of the dangers, some law firms have decided to avoid using AI at all within client matters, and some clients have directed their firms not to use it. Given the pitfalls that accompany unrestrained use of AI, it would seem prudent for any law firm to draw up a suitable usage policy.
A workable law firm AI policy is something of a moving target. Some firms are prohibiting anyone from using any artificial intelligence without approval. Garver recommends the inclusion of these guidelines:
- Knowledge: Legal staff using it must have a thorough understanding of generative AI and how it works. “Solution providers can play a key role here, by educating the market on product features, trends, developments, best practices and risks,” says Garver.
- Disclosure: The American Bar Association requires lawyers to discuss with their clients the means by which legal services will be provided, meaning law firms should disclose any use of AI. Garver recommends such disclosures be included in a firm’s client engagement letter before services are provided, so there are no surprises later.
- Security: “Establish rules around what data can and cannot be used with generative AI,” says Garver. “Lawyers have an ethical obligation to prevent the inadvertent or unauthorized disclosure of confidential client information.”
Vendors must also be scrutinized: How do they go about protecting the law firm’s client information? And are they providing training to ensure that their technology is used in a manner that doesn't create an unreasonable risk to client confidentiality?
Due diligence extends to the AI programs themselves. “Note that the terms of ChatGPT expressly state that any content shared with their program may be reviewed and is not private,” says Garver. “I think a lot of lawyers haven’t taken the time to consider that. We suggest that law firms do not input client, firm or personal data into the AI tool. Just stick with public information or predefined, permissible use cases within the firm.”
While the pitfalls of AI are many, the technology remains a promising tool for enhancing the legal industry’s productivity and profitability — when used responsibly. Indeed, many law firms have already been using automation for years. The new generation of AI will help them do that work even better, provided they carefully plan their adoption of the technology to avoid the dangers.
“I think the road ahead is going to be very interesting,” says McSpadden. “Many of AI’s questionable areas are going to be fleshed out a bit more in the next 18 to 24 months.”
TUNE IN: A CLOSER LOOK AT GENERATIVE AI
Check out our recent discussion on all things generative AI with Matthew Sullivan — Chief Operating Officer at Sullivan Law & Associates, Founder and Chief Executive Officer of consulting firm Unravel Legal and a member of ALA’s Professional Development Advisory Committee (PDAC). We talk about how ChatGPT and other forms of generative AI can improve efficiencies at firms as well as ways legal administrators can use this tech now. Give it a watch on our YouTube channel or download the audio version wherever you listen to your podcasts.