Large language models (LLMs) are both incredibly powerful and incredibly limited. Generative AI is a nascent technology, but the good news is that its limitations are largely predictable, and developing a familiarity with them can help legal professionals navigate potential pitfalls when using the technology in their work.
Why develop Generative AI skills?
CLM vendors dominated the event in 2023, and they did it in style. At the time, my only prior experience at a legal conference had been the Legal Innovation & Tech Fest in Sydney, and in comparison the glitzy booths sprawling across the exhibit hall of the Bellagio were a shock to my modest Kiwi sensibilities.
In 2024, CLM vendors were much more muted, eschewing elaborate sets and costumed mascots in favor of a more serious presentation style. Notably, there was an absence of the lavish parties that every CLM vendor seemed to be throwing last year.
It’s no secret that CLMs have been the primary legal tech beneficiary of VC dollars in recent years; the pullback of VC funding in CLM companies and its subsequent impact on marketing budgets were felt throughout the event. There were also indications that CLMs were losing their shine in other ways; Zach Abramowitz has previously questioned whether CLMs have reached Product Market Fit, and privately I heard similar comments both from other vendors and legal ops professionals.
These firms undoubtedly see generative AI as an opportunity to create new, more compelling experiences for users. Speaking of AI…
Mastering effective prompting techniques
1. Be clear and specific
Detailed prompts yield better results. This is almost so obvious as to go without saying, but Generative AI is not completely magic—you do need to give clear instructions.
2. Use examples
Including examples within prompts can significantly improve AI outputs. This technique is known as “one-shot learning” (if providing a single example) or “few-shot learning” (if providing multiple examples).
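In code, few-shot prompting usually amounts to prepending worked examples to the query so the model can infer the task and output format. The sketch below is a minimal illustration; the clause texts, labels, and the classification task itself are invented for this example.

```python
# Minimal sketch of few-shot prompt construction.
# The example clauses and labels below are invented for illustration.

EXAMPLES = [
    ("Either party may terminate this Agreement on 30 days' written notice.",
     "Termination"),
    ("Neither party shall be liable for indirect or consequential losses.",
     "Limitation of liability"),
]

def build_few_shot_prompt(clause: str) -> str:
    """Prepend worked examples so the model can infer the task format."""
    lines = ["Classify each contract clause by type.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Clause: {text}")
        lines.append(f"Type: {label}")
        lines.append("")
    lines.append(f"Clause: {clause}")
    lines.append("Type:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "The Supplier shall indemnify the Customer against third-party claims.")
```

With one entry in `EXAMPLES` this is one-shot prompting; with several, few-shot.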
3. Chain of thought
Encourage the AI to reason through its process step by step. Breaking complex problems down into their component parts and adding "let's think step by step" to prompts can enhance accuracy.
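As a sketch, the technique can be as simple as decomposing the question into sub-questions and ending with the step-by-step cue. The legal question and its decomposition below are invented for illustration.

```python
# Minimal sketch of a chain-of-thought prompt: break a nested question
# into sub-questions and close with a step-by-step cue.
# The question and sub-questions below are invented for illustration.

def build_cot_prompt(question: str, sub_questions: list[str]) -> str:
    parts = [question, "", "Work through the following in order:"]
    parts += [f"{i}. {q}" for i, q in enumerate(sub_questions, start=1)]
    parts += ["", "Let's think step by step."]
    return "\n".join(parts)

prompt = build_cot_prompt(
    "Is the Supplier's liability for data loss capped under this agreement?",
    [
        "Does the limitation of liability clause cover data loss?",
        "Is data loss carved out by any exclusion?",
        "If it is capped, what is the cap amount?",
    ],
)
```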
4. Role play
Framing the AI as an expert and assigning a specific role can improve the quality of outputs. For example, starting with "You are an excellent contract lawyer" can yield better results.
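With chat-style APIs, the role framing typically goes in the system message. A minimal sketch follows; the message structure uses the common chat-completions convention (a list of role/content dicts), with no specific vendor API assumed.

```python
# Minimal sketch of role prompting via a chat-style message list.
# The system/user message structure follows the common chat-completions
# convention; no specific vendor API is assumed.

def build_role_messages(task: str) -> list[dict]:
    """Frame the model as an expert via the system message."""
    return [
        {"role": "system",
         "content": "You are an excellent contract lawyer."},
        {"role": "user", "content": task},
    ]

messages = build_role_messages("Review the indemnity clause below for gaps.")
```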
Navigating common pitfalls
LLMs are prone to two categories of errors:
1. Legal reasoning errors
These commonly occur when the AI struggles with complex nested logic, such as exclusions to limitations of liability. Breaking down prompts into simpler, single-statement queries can help mitigate these issues.
2. Issue spotting errors
These happen when the relevant part of the document isn't processed by the AI. To avoid this, users should narrow down the scope of their queries by only passing relevant passages of the agreement into the AI at a time.
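One sketch of this narrowing: split the agreement into clauses and pass only those relevant to the question at hand. A production system would use proper retrieval (for example, embeddings search); the naive keyword filter and sample clauses below are stand-ins for illustration only.

```python
# Sketch: pass only relevant passages to the model rather than the
# whole agreement. A naive keyword filter stands in for real
# retrieval (e.g. embeddings search); the clause texts are invented.

AGREEMENT = [
    "1. Term. This Agreement commences on the Effective Date.",
    "2. Fees. The Customer shall pay the Fees within 30 days.",
    "3. Liability. The Supplier's liability is capped at the Fees paid.",
    "4. Termination. Either party may terminate for material breach.",
]

def relevant_clauses(clauses: list[str], keywords: list[str]) -> list[str]:
    """Keep only clauses mentioning at least one query keyword."""
    return [c for c in clauses
            if any(k.lower() in c.lower() for k in keywords)]

context = relevant_clauses(AGREEMENT, ["liability", "cap"])
prompt = "Answer using only these passages:\n" + "\n".join(context)
```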
We also suggest avoiding the use of LLMs for research tasks due to the risk of hallucinations and outdated information, particularly in cases where you cannot easily verify the AI’s output.
Practical use cases for Generative AI
Min-Kyu highlighted three practical use cases for generative AI in contract review:
1. Tidy up review
AI can check for correct clause references and consistent use of defined terms, significantly reducing tedious manual work.
Prompt: Identify and list any instances where a clause reference points to a non-existent or incorrect clause. If a reference is found to be erroneous, specify both what the incorrect reference is and, based on the context, suggest which specific clause it should actually refer to.
2. Red flag reviews
AI can flag terms that are generally unfavorable to the user’s position, helping to streamline the review process.
Prompt: You are acting for Party X. Review the agreement and provide a list of the top 5 provisions that are unfavorable to Party X, with a brief explanation of why each provision is unfavorable to Party X. An unfavorable provision could be one that is particularly onerous to Party X, not market standard, and/or exposes Party X to significant legal or commercial risk.
3. Consistency checks
AI can verify that agreements are consistent with predefined legal positions, providing a brief explanation for any discrepancies.
Prompt: You are acting for Party X. Review the agreement. For each of the requirements below, assess whether the agreement meets it, and provide a brief explanation for why the agreement meets or does not meet the requirement.
1. Requirement 1
2. Requirement 2
3. Requirement 3
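A prompt like the one above lends itself to programmatic assembly from a maintained list of predefined positions. A minimal sketch, with placeholder requirements standing in for real legal positions:

```python
# Sketch: assemble the consistency-check prompt from a list of
# predefined positions. The requirement texts are placeholders.

def build_consistency_prompt(party: str, requirements: list[str]) -> str:
    """Number each requirement and prepend the review instruction."""
    header = (f"You are acting for {party}. Review the agreement. "
              "For each of the requirements below, assess whether the "
              "agreement meets it, and provide a brief explanation for "
              "why the agreement meets or does not meet the requirement.")
    numbered = [f"{i}. {r}" for i, r in enumerate(requirements, start=1)]
    return header + "\n" + "\n".join(numbered)

prompt = build_consistency_prompt("Party X", [
    "Governing law is England and Wales.",
    "Liability is capped at 12 months' fees.",
])
```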
For more information on the above, check out our webinar replay here where we walk through these areas step by step.