Litigation Support


Project Validation for TAR: Part 2

By Brian Balistreri posted 04-26-2023 09:17


Please enjoy this blog posted on behalf of the author, Jonathan Kiang, Senior Electronic Discovery Consultant, Epiq.

At the end of a long and tedious document review, the last thing any attorney wants to be told is that more documents need to be reviewed.  Maybe the relevancy rates for recent batches of documents selected by the TAR model have dropped to near zero.  And perhaps the number of documents marked relevant is close to the estimated total number of relevant documents.  So why, after thousands or even millions of documents have already been reviewed, is more review needed for validation?

TAR validation may seem like an afterthought, but it’s critical in culling situations in order to assess performance.  While responsiveness rates and relevant coding counts are helpful in evaluating a review, any approach that doesn’t look at the culled set is only seeing half the picture.

In a production context, validation is a key step in demonstrating the defensibility of a cull.  The documents in a TAR review carry some indicia of relevancy: they have survived all prior filtering, whether search terms, custodian restrictions, date limitations, or some other criteria.  If you aren’t going to review them, you need an acceptable justification, which the validation process can provide.

Even in a non-production context, validation remains important.  While you may not be dealing with an external adjudicator, the basic question is still the same: how can I be confident that I haven’t missed what I was trying to find?

Common Validation Mistakes to Avoid

1. Not Revalidating

Validation tests a particular workflow result at a specific point in time.  In the context of elusion testing, any change to the null set should trigger a new validation sample.  From a practical perspective, removing documents from the null set by reviewing them is not a concern: if the prior results based on unreviewed null set documents were satisfactory, then additional review can only be an improvement.  However, expanding the null set with new documents, such as from a rolling load to the TAR population, invalidates any prior elusion samples, because the expanded null set contains documents that weren’t represented in any earlier sample.
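
To make the arithmetic concrete, here is a minimal sketch of an elusion estimate in Python, assuming a simple random sample drawn from the null set and a normal-approximation confidence interval.  The function name and figures are hypothetical, and at very low observed rates an exact binomial interval is the safer choice.

    import math

    def elusion_estimate(sample_size, relevant_found, z=1.96):
        """Point estimate and normal-approximation interval for the elusion rate.

        sample_size:    documents randomly drawn from the null set and reviewed
        relevant_found: how many of those were coded relevant
        z:              z-score for the desired confidence level (1.96 ~ 95%)
        """
        p = relevant_found / sample_size                # observed elusion rate
        moe = z * math.sqrt(p * (1 - p) / sample_size)  # margin of error
        return p, max(0.0, p - moe), p + moe

    # Hypothetical example: 3 relevant documents turn up in a 1,500-document sample.
    rate, low, high = elusion_estimate(1500, 3)
    print(f"Elusion rate: {rate:.2%} (95% CI roughly {low:.2%} to {high:.2%})")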

Practically speaking, if some set of TAR results was considered reasonable based on an elusion sample, small additions to the null set are unlikely to alter that.  So attorneys sometimes do not want to revalidate for the expanded population, and I have had to caution them that they lack the statistical basis for validation, at least with respect to the null set additions.

2. Using Inappropriately Sized Samples

Validation requirements can vary significantly.  An internal investigation may require only a rough assurance that the key records were found, while a regulatory production may have strictly defined statistical targets.  Sample sizes are definitely not one-size-fits-all, and some consideration should be given to matching the size to the requirements.

Proving that attorneys are an industrious lot, I have found requests for overly large samples to be much more common than requests for undersized samples.  The typical intent is to shrink the margin of error and reduce sampling uncertainty, which, while certainly laudable, comes at the cost of larger sample sizes.  Because the margin of error shrinks with the square root of the sample size, the rough rule of thumb is that you need to quadruple the sample size in order to halve the margin of error.  Very narrow margins of error require huge samples, so be conscious of whether your validation goals actually need that degree of precision.
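
To see the rule of thumb in action, here is a short sketch using the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/E², with the conservative worst case p = 0.5.  This is an illustration, not any particular platform’s calculation; tools that apply a finite-population correction will report slightly smaller numbers.

    import math

    def sample_size(margin_of_error, z=1.96, p=0.5):
        """Worst-case sample size for a proportion: n = z^2 * p * (1 - p) / E^2."""
        return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

    # Halving the margin of error roughly quadruples the required sample:
    print(sample_size(0.05))     # 5% MOE at 95% confidence -> 385
    print(sample_size(0.025))    # 2.5% MOE -> 1,537
    print(sample_size(0.0125))   # 1.25% MOE -> 6,147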

The most memorable sampling request I have seen was when an attorney asked for an elusion sample to achieve a 2% margin of error at 97% confidence.  While a 2% margin of error was probably overly narrow, it was at least conventional.  The 97% confidence level, however, was one I had never encountered, and it led to a long discussion about what he was trying to accomplish and why those parameters were requested; we eventually settled on the more relaxed targets of a 2.5% margin of error at 95% confidence.
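
Plugging those parameters into the same worst-case formula shows why the discussion was worth having.  The z-scores below are derived from the requested two-tailed confidence levels; as above, the exact counts a given platform reports may differ slightly.

    from math import ceil
    from statistics import NormalDist

    def sample_size(confidence, margin_of_error, p=0.5):
        """Worst-case (p = 0.5) sample size for a two-tailed confidence level."""
        z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-tailed z-score
        return ceil(z**2 * p * (1 - p) / margin_of_error**2)

    print(sample_size(0.97, 0.02))    # original request: ~2,944 documents
    print(sample_size(0.95, 0.025))   # relaxed targets:  ~1,537 documents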


#LitigationSupport
#eDiscovery


#PracticeManagementandPracticeSupport
#Firm
#TAR