Please enjoy this blog post, published on behalf of its co-authors: Trevor Burroughs, Sr. E-Discovery Analytics Adv., Paul, Weiss, Rifkind, Wharton & Garrison LLP, and Adam Strayer, E-Discovery Analysis & Review Technology Manager, Paul, Weiss, Rifkind, Wharton & Garrison LLP.
Technology Assisted Review (TAR) is an e-discovery process that uses advanced technology to complement human review. Given the sophistication of some TAR tools, it can be important to apply the right expertise when using TAR, including as part of the procedure used to validate its results.
Working with experts can speed up a TAR project while also reducing the risk that key validation statistics such as recall (the percentage of all responsive documents captured by TAR) are incorrectly calculated. Furthermore, as no one workflow or process will apply to all matters, experts in the field can develop a tailored approach that is specific to the needs of each case, ensuring that the review is thorough, defensible, and proportional to the matter.
One common TAR validation technique with which lawyers should be familiar is elusion testing. Rather than looking at documents TAR classified as responsive, this process examines the unreviewed document population TAR considered likely to be non-responsive. By sampling and reviewing documents from this group, an elusion test provides an estimate of the number of responsive documents TAR failed to score correctly (i.e., the estimated number of truly responsive documents TAR thought were non-responsive). Generally, this test is performed when the responsiveness rate of the review has fallen to almost 0% and it is clear the vast majority of responsive documents have already been identified.
When conducting an elusion test, one may choose to consider the TAR score of the lowest-ranked responsive document as a cut-off below which all documents are presumptively non-responsive. Assuming a sample of sufficient size is taken from the low-scoring population falling below that cut-off, the percentage of responsive documents found by reviewers in that sample can then be imputed to the rest of the low-ranking, likely non-responsive document population. This then allows users to calculate recall based on that estimated population and the number of documents coded as responsive during the review.
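The arithmetic described above can be sketched in a few lines. The figures below are purely hypothetical and chosen only to make the steps concrete; actual sample sizes and confidence levels should be determined with statistical expertise for each matter.

```python
# Illustrative recall calculation from an elusion test.
# All counts below are hypothetical, not drawn from any real matter.

# Documents coded responsive during the TAR review.
reviewed_responsive = 12_000

# Unreviewed, low-scoring documents falling below the cut-off.
null_set_size = 150_000

# A random sample drawn from that low-scoring population and
# reviewed by humans; 15 of the sampled documents prove responsive.
sample_size = 1_500
sample_responsive = 15

# Elusion rate: fraction of the sampled population that was responsive.
elusion_rate = sample_responsive / sample_size

# Impute that rate to the rest of the low-scoring population to
# estimate the responsive documents TAR scored incorrectly.
estimated_missed = elusion_rate * null_set_size

# Recall: responsive documents found, divided by the estimated total
# responsive population (found plus estimated missed).
recall = reviewed_responsive / (reviewed_responsive + estimated_missed)

print(f"Elusion rate: {elusion_rate:.2%}")
print(f"Estimated missed responsive documents: {estimated_missed:,.0f}")
print(f"Estimated recall: {recall:.1%}")
```

With these hypothetical numbers, a 1% elusion rate over a 150,000-document population implies roughly 1,500 missed responsive documents, yielding an estimated recall just under 89%.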
Though a statistically sound elusion test and the resulting recall score may be defensible on their own, some litigators may also choose to look beyond the unreviewed population and take reviewer error into account when validating. For example, during validation one might conduct a standard elusion test as described above and also review random samples of documents coded as responsive and non-responsive by the review team. This sampling method, essentially another reviewer QC step, can provide even greater insight into the TAR project’s outputs and reduce the impact of reviewer error on the team’s final calculations.
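One simple way to fold such QC samples into the arithmetic, sketched below with hypothetical figures, is to estimate an overturn rate from each sample and adjust the coded-responsive count accordingly. This is only an illustration of the idea, not a prescribed methodology.

```python
# Hypothetical adjustment of coded counts for reviewer error.
coded_responsive = 12_000
coded_nonresponsive = 90_000

# QC sample of documents coded responsive: 10 of 500 are overturned
# to non-responsive on second review.
resp_sample, resp_overturns = 500, 10

# QC sample of documents coded non-responsive: 5 of 500 are overturned
# to responsive on second review.
nonresp_sample, nonresp_overturns = 500, 5

# Observed overturn rates in each direction.
fp_rate = resp_overturns / resp_sample        # coded responsive, actually not
fn_rate = nonresp_overturns / nonresp_sample  # coded non-responsive, actually responsive

# Adjusted estimate of truly responsive documents in the reviewed set:
# keep the share of responsive codings that held up, and add the share
# of non-responsive codings estimated to have been missed.
adjusted_responsive = (coded_responsive * (1 - fp_rate)
                       + coded_nonresponsive * fn_rate)

print(f"Adjusted responsive estimate: {adjusted_responsive:,.0f}")
```

The adjusted figure could then replace the raw coded-responsive count in the recall calculation, reducing the effect of reviewer error on the final statistic.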
As part of conducting discovery, lawyers may choose to implement a TAR validation process like this one, combining random sampling, quality checks, and statistical analysis. And while what is reasonable and proportional will vary from matter to matter, when e-discovery lawyers and technology professionals collaborate closely, they can help optimize TAR, promoting accuracy, defensibility, and overall success.