In Search of Valid Data Searches
In a litigation demand, the volume, variety and complexity of the data have changed, as have the required knowledge and search expertise.
3. If It’s Not Documented, It Didn’t Happen
This rule is the corollary to the previous one and is essential to bear in mind if a challenge actually occurs. As anyone involved in litigation knows all too well, lawsuits can last a long time. Challenges to discovery processes in large cases can take place months or even years after the tasks were completed. For that reason, as well as the general unreliability of human memory, it is essential to adopt in every case a well-defined, consistent approach to documenting both decision making and the implementation of a repeatable process. This is particularly important with regard to data searching because the process is often iterative, involving many attempts and revisions. Even when mistakes are made, documentation of the steps that were taken can go a long way toward reducing the temperature and avoiding severe sanctions.
4. What Makes a Sample Ample?
An essential element of conducting a valid search is the well-executed testing of results. Since sets of potentially responsive data are typically too large to review item by item, which is the whole reason data is searched in the first place, using a statistically valid sample to test the results of a search is crucial to demonstrating the accuracy of the search. In-depth discussion of different types of sampling methods is beyond the scope of this article, but the critical point is that decisions about what type of sample to use (e.g., random, stratified, uniform, systematic and various combinations) and what sample size to use must be well informed and well documented. Specific knowledge and expertise are crucial for this purpose. Whether that expertise is found within the organization or outside it, the approach should be well reasoned, documented and explainable.
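To make the sample-size question concrete, the following is a minimal sketch in Python of the standard formula for sizing a simple random sample when estimating a proportion (such as a responsiveness or error rate), with a finite population correction. The function name, the example document count and the chosen confidence level and margin of error are illustrative assumptions, not a recommendation for any particular matter.

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    """Simple-random-sample size for estimating a proportion.

    confidence_z=1.96 corresponds to a 95% confidence level;
    p=0.5 is the most conservative assumption about the proportion.
    A finite population correction shrinks n for small collections.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Hypothetical collection of one million documents: a 95% confidence
# level with a +/-5% margin of error needs only 385 sampled items.
print(sample_size(1_000_000))               # 385
print(sample_size(1_000_000, margin=0.02))  # a tighter margin needs more
```

Note the counterintuitive result the formula makes explicit: above a modest collection size, the required sample barely grows with the population, which is what makes sampling practical for large matters.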
5. Learning From Your Mistakes
Surprisingly, many search processes fail after errors have been detected rather than from a failure to detect them in the first place. Suppose you sample the results of your search, including documents that hit on search terms and those that didn’t. Now what? Is it enough to simply correct the specific errors that were found in the sample? Very likely not, since the purpose of a sample is to get a representation of the quality of the overall results—not to identify every error that exists. Yet many practitioners make this mistake. Detecting errors should be a point of departure for improving the process, not the end of it. Refine the search criteria, rerun searches, then sample and test again until you reach what you believe is a reasonable and defensible (again, not necessarily perfect) error rate (see Rule 2).
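The sample-test-refine loop described above can be sketched as follows. This is a simplified illustration, assuming a normal-approximation confidence interval on the observed error rate; the target rate, sample counts and stopping rule are hypothetical, and real matters may call for different intervals (e.g., Wilson or exact binomial) and thresholds agreed upon by the parties.

```python
import math

def error_rate_upper_bound(errors, n, z=1.96):
    """Upper bound of a normal-approximation confidence interval
    for the error rate observed in a sample (z=1.96 -> 95%)."""
    p = errors / n
    se = math.sqrt(p * (1 - p) / n)
    return p + z * se

# Hypothetical workflow: after each search revision, draw a fresh
# sample, count errors, and stop only when the interval's upper
# bound falls below the target rate -- not when the raw count
# merely looks small.
target = 0.05
for round_no, (errors, n) in enumerate([(40, 400), (22, 400), (9, 400)], 1):
    ub = error_rate_upper_bound(errors, n)
    verdict = "stop" if ub < target else "refine and rerun"
    print(f"round {round_no}: upper bound {ub:.3f} -> {verdict}")
```

The point of using the interval's upper bound rather than the raw sample rate is defensibility: it accounts for the uncertainty inherent in judging a large collection from a small sample, which is exactly the objection a challenger is likely to raise.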
While these fundamental rules provide a foundation for conducting valid searches, the devil is in the details and the execution. The process of designing and executing valid searches needs to be undertaken with the seriousness and time commitment it deserves and with the required knowledge and expertise. Winging it in the world of high-stakes litigation can lead to a painful search for explanations of missteps and possibly new employment.
About the Author
Thomas Barnett is managing director and e-discovery practice leader at Stroz Friedberg LLC, which helps clients manage their digital risks in the areas of digital forensics, data breach and cybercrime response, electronic discovery, business intelligence and investigation, and security risk consulting.