This panel brought together leading voices in law, technology, and ediscovery practice: Tara Emory (Covington & Burling LLP), Elizabeth Gary (Morgan, Lewis & Bockius LLP), Ben Sexton (JND Legal Administration), and Cristin Traylor (Relativity). Their discussion explored how generative AI is reshaping legal document review, while underscoring that innovation must be paired with rigor, transparency, and defensibility.
The panel began by situating generative AI within the evolution of Technology-Assisted Review (TAR). Earlier TAR workflows relied on control sets, iterative sampling, and active learning to prioritize documents for review. Generative AI, while distinct, builds on these foundations. “Front-loaded learning is basically the same as TAR 1.0, except now the user is part of the algorithm,” one panelist noted. Instead of coding documents directly, practitioners now craft and refine prompts, a process that requires both legal and technical acumen.
One of the recurring themes was that effective use of AI hinges on prompt iteration and thoughtful sampling. As Sexton explained, a random sample may over-represent common issues, so reviewers must design samples that expose prompts to a diverse document set. Once results are compared to attorney review, discrepancies drive prompt refinements, which are then retested across the whole sample to avoid regression. Elizabeth Gary emphasized that drafting prompts often proves simpler than expected, provided legalese is stripped away in favor of plain language. New features, like Relativity’s Kickstarter tool, now enable discovery professionals to generate initial prompts and refine them before involving subject matter experts (SMEs). This accelerates collaboration and ensures SMEs are engaging with substantive issues earlier in the process.
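The loop Sexton describes — draw a sample that covers diverse document types, compare prompt output to attorney calls, refine, then retest across the whole sample — can be sketched in a few lines. This is an illustrative sketch only, not any vendor's implementation: it assumes documents have already been tagged with a stratum (custodian, file type, or issue), and the helper names (`stratified_sample`, `regression_check`) are hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(docs, per_stratum, seed=7):
    """Draw an equal number of documents from each stratum so that rare
    issues are represented, rather than letting a random draw over-weight
    the most common document types."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for doc in docs:
        by_stratum[doc["stratum"]].append(doc)
    sample = []
    for members in by_stratum.values():
        k = min(per_stratum, len(members))
        sample.extend(rng.sample(members, k))
    return sample

def regression_check(prompt_calls, attorney_calls):
    """After a prompt refinement, compare the prompt's calls against
    attorney review across the WHOLE sample, not just the documents that
    motivated the latest change, to catch regressions elsewhere."""
    return [doc_id for doc_id, call in prompt_calls.items()
            if attorney_calls.get(doc_id) != call]
```

The point of retesting the full sample is the one the panel made: a fix that resolves one discrepancy can silently introduce another on documents the prompt previously handled correctly.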
The panel devoted considerable time to validation, stressing that both qualitative and quantitative approaches are essential. Qualitative validation involves reviewing rationales, citations, and outputs for coherence, while quantitative validation measures recall, precision, and elusion rates. Tara Emory illustrated the concept with a vivid analogy: “If I tell you to catch three tuna and you come back with one, your recall is 33%. If you net the whole ocean, you’ll catch all three tuna — but also garbage. That’s recall versus precision.”
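Emory's analogy maps directly onto the standard formulas: recall is TP / (TP + FN) and precision is TP / (TP + FP), while elusion is the rate of responsive documents found in a sample drawn from the discard pile. A minimal illustration in Python — the `review_metrics` helper and the garbage count are made up for the example, not part of any review platform:

```python
def review_metrics(true_positives, false_negatives, false_positives):
    """Recall: share of truly responsive documents that were found.
    Precision: share of documents flagged responsive that actually are."""
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return recall, precision

# Catch one of the three tuna and miss two: recall is 1/3 (about 33%),
# but everything caught was a tuna, so precision is perfect.
one_tuna = review_metrics(true_positives=1, false_negatives=2, false_positives=0)

# Net the whole ocean: all three tuna, plus (say) 997 pieces of garbage.
# Recall is now perfect, but precision collapses.
whole_ocean = review_metrics(true_positives=3, false_negatives=0, false_positives=997)
```

The trade-off the analogy captures is exactly this tension: tightening a prompt to raise precision risks lowering recall, and vice versa, which is why both numbers are reported together.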
Notably, the panel stressed that validation results must be transparent and defensible. Courts and opposing parties are increasingly likely to scrutinize AI-assisted workflows, making clear documentation critical. As one participant observed, linear human review has never been validated at this level of rigor, and yet AI is often held to higher standards. Radical transparency, the panelists argued, is the best path to credibility.
Generative AI does not eliminate the need for attorney involvement; in fact, it increases it. “It’s always been an imperfect process. It will always be an imperfect process. The biggest opportunities are getting subject matter experts into the conversation early,” one panelist concluded. Unlike traditional review, AI-driven workflows push SMEs to define relevance, clarify nuances, and identify sensitive issues at the outset, ultimately improving outcomes.
The panelists also noted that AI can reduce the number of documents requiring manual review, freeing attorneys to focus on complex, high-value analysis. But this efficiency comes with responsibility: each step must be logged, explained, and defensible.
The session closed with practical advice for ILTA members looking to apply generative AI in their organizations:
- Keep a human in the loop. AI accelerates review, but oversight by attorneys and SMEs remains essential.
- Document everything. Detailed logs of sampling, prompt iterations, and validation are crucial for defensibility.
- Engage SMEs early. Early involvement prevents costly rework and improves both recall and precision.
- Be transparent. Share validation methods and results openly with stakeholders — and be prepared to explain them.
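The “document everything” advice above can be as simple as an append-only, machine-readable log of every sampling, prompting, and validation step. A minimal sketch, assuming a JSON-lines file; the `log_step` helper and its field names are illustrative, not a feature of any review tool:

```python
import datetime
import json

def log_step(logfile, step, **details):
    """Append one timestamped entry per workflow step (sample drawn,
    prompt revised, validation run) so the process can be reconstructed
    and defended later."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        **details,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Record a prompt revision and the validation run that followed it.
log_step("review_audit.jsonl", "prompt_revision",
         prompt_version=3, reason="missed privilege discussions in sample")
log_step("review_audit.jsonl", "validation",
         prompt_version=3, recall=0.91, precision=0.74, sample_size=400)
```

An append-only, plain-text format keeps the record inspectable by opposing parties and the court without any special tooling, which is consistent with the radical transparency the panel advocated.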
The panel struck a balance between excitement for generative AI’s potential and caution about its limitations. While AI can dramatically improve efficiency, it does not remove the burden of professional judgment, nor does it erase the need for defensible processes. As Traylor noted, “Transparency and cooperation are the way forward.” The session captured a pivotal moment for the industry: a shift from experimentation to thoughtful adoption, where success will be defined as much by process and validation as by technology itself.