US law firms' use of generative artificial intelligence tools to coach lawyers, automate workflows and tackle complex tasks underscores the growing importance of the technology two years after the launch of OpenAI's groundbreaking ChatGPT.
The release of the chatbot was the first time the general public was able to see the power of generative AI tools and their ability to generate code, text or images in response to natural language prompts.
But since then, the legal industry has faced the same dilemma as other industries: how to capitalize on new technology without cannibalizing existing jobs or compromising quality?
“We see tremendous potential for generative AI to make us better lawyers, and we want our employees to feel confident using the technology,” said Isabel Parker, chief innovation officer at White & Case. “We also have an obligation to our clients to ensure that we use generative AI safely and responsibly.”
Firms now better understand how the technology can make legal work better, faster and more cost-effective.
In mid-2023, Crowell & Moring began using generative AI for “legal-related” matters that didn’t involve confidential information. The firm recommended its use for legal work “on a case-by-case basis where generative AI adds value, mitigates risk, and the client has consented,” says Alma Asay, chief innovation and value officer at Crowell.
The firm has gradually deployed AI to assist with core tasks such as drafting letters and summarizing witness statements, with a client’s consent. This reduced the time it takes to summarize a client’s intake notes to less than 30 minutes, compared with two to four hours previously, says Asay.
Many firms have now tested the technology in relatively low-risk settings, allowing their clients to become more accustomed to generative AI. They are now thinking about how they can improve their work processes even further and gain a competitive advantage.
“A summary of a document is useful, but not groundbreaking... It’s a cost-avoidance play that allows us to ask vendors better questions or bypass them,” says Thor Alden, deputy director of innovation at Dechert, which builds its own AI tools on top of models from leading developers.
More important, he says, are the custom tools that Dechert has developed “to capture data sets and integrate them into our workflows.” These tools can search massive data sets for specific information and respond to queries in the manner of an experienced lawyer.
The next goal is to develop AI agents capable of performing a range of legal tasks – essentially acting as an additional team member.
“AI allows you to view any document, in any context, on any day,” says Alden. The tools let you “search in ways you otherwise couldn’t, and it might turn up an answer you hadn’t considered.”
Two of the biggest barriers to adoption so far are technological sophistication and client caution – particularly when it comes to giving generative AI tools access to sensitive data.
Several firms emphasize the importance of their employees’ mastery of the technology and see it as a competitive advantage in the industry. Crowell has made AI training mandatory for its staff, and 45 per cent of the firm’s lawyers have used the technology professionally. Similarly, Davis Wright Tremaine has developed an AI tool to teach young lawyers to write more effectively.
However, using generative AI for more complex legal issues brings additional complications. Even the best chatbots today are prone to errors and fabrications known as hallucinations. These are serious concerns in a sector where confidentiality and accuracy are paramount.
“There are many reasons why a client would say no to AI,” says Alden. “Sometimes it’s just a matter of caution about the risks; sometimes they just want you to ask permission [before using AI].”
At Crowell, lawyers are required to complete training that addresses topics such as hallucinations, the use of client data and their own ethical responsibilities. The firm emphasizes both the limitations of AI tools and their potential, says Asay.
White & Case, meanwhile, has sought to protect client data by developing its own large language model in-house. “It draws on numerous legal sources but is privately licensed and deployed securely on the firm’s private network,” says Janet Sullivan, global director of practice technology.
This approach gives lawyers “the flexibility to realize the full potential of this technology” and gives the firm access to powerful, world-class open-source models while protecting its data, she says.
The full potential of AI in the legal sector is far from being realized, as firms and their clients adapt to a technology that is still too error-prone to be used in highly sensitive settings.
But it already reduces the time spent on tedious tasks like searching through data and summarizing documents. And further efficiency gains can be expected in the near term.
“I have always believed that technology helps lawyers get back to the practice of law,” says Asay. “Decades ago, when the amount of data was still manageable, we didn’t need these tools. As the volume of data grows, technology helps us keep pace and ensures people can focus on its highest and best uses.”