Defensibility of eDiscovery AI in Court

George Socha
September 2, 2021

15 min read


While the defensibility of AI generally has gotten increased attention of late, there has been less focus on the defensibility of AI specifically when used for eDiscovery, and even less when it comes to the defensibility of using AI models to facilitate discovery in the US civil justice system.

The first issue has gotten the most attention and is broader than the scope of today's post, which focuses on the use of AI in our legal system. The second question - whether the use of AI to help with eDiscovery is defensible - has been with us for at least a decade. While the question gets posed on a regular basis, the responses tend to be rather vague. The third question, about the defensibility of AI models, is just starting to get attention.

What Do We Mean by "Defensibility"?

The plain-language meaning of "defensible" is, according to Merriam-Webster, "capable of being defended". While that definition is straightforward, it does not help us much here, where our focus is on judicial systems.

More useful is a framework enunciated in a 2012 Sedona Conference article, "Defensible" By What Standard?, by United States Magistrate Judge Craig Shaffer. In that article, the judge described a "defensible e-discovery protocol" thus:

At the outset of litigation, and often even before a lawsuit commences, the goal is to implement a discovery plan that identifies, collects and produces relevant and responsive non-privileged materials from a larger universe of ESI using reliable methodologies that provide a quality result at costs that are reasonable and proportionate to the particular circumstances of the client and the litigation.

He went on to write, "Ultimately, a technology-assisted review process must comport with the requirements of the Federal Rules of Civil Procedure, be proportionate to the claims, defenses and circumstances of the particular case, and be reasonably transparent to the court and opposing parties."

As Judge Shaffer acknowledged, this is a post hoc standard - one applied only after a party has made critical decisions, likely has spent considerable time and money, and probably has completed much if not all of the process. The standard set forth above does not provide much guidance to those who are at the outset of the process.

Fortunately, the judge's analysis did not stop there. Rather, he went on to offer a standard for measuring defensibility. According to that standard:

  • An eDiscovery plan (and by extension, process) "is not to be held to a standard of perfection". It only needs to consist of "reasonable efforts [by a party] to identify and produce responsive, non-privileged material...." (citing FRCP 34, In re Delta/AirTran Baggage Fee Antitrust Litigation, ___ F. Supp. 2d ___, 2012 WL 360509, at *13-14 (N.D. Ga. Feb. 3, 2012), and similar decisions.)
  • That entails that the party "conduct a reasonable search for responsive documents". Reasonableness, in turn, is defined at least in part by what it is not, such as "halfhearted and ineffective efforts to identify and produce documents". (quoting Robinson v. City of Arkansas City, Kansas, 2012 WL 603576, at *4 (D. Kan. Feb. 24, 2012).)
  • Judge Shaffer continued, writing in essence that for a TAR process to be reasonable and defensible, it had to meet the FRCP requirements of proportionality. He set forth four criteria that needed to be met:
    • Functionality: The selected process had to be functional: "[T]he proposed methodology should be commensurate with the quantity of potentially responsive ESI and all pertinent data types, repositories and custodians/users. Here, the focus should be on the 'fit' between the technology and the data collection to be searched".
    • Reasonableness: The selected search methodology had to be reasonable, "consider[ing] the 'fit' between the costs associated with the search methodology and the overall value of the litigation".
    • Reliability: The methodology had to "be demonstrably reliable in terms of recall and precision, or other appropriate metrics".
    • Understandability: The methodology had to "be readily understandable to multiple audiences (e.g., the client, opposing counsel and the court)".

Judge Shaffer's article was published in 2012. Three years later, the Federal Rules of Civil Procedure underwent their second major revision designed to address challenges raised by the discovery of electronically stored information. Nonetheless, the judge's four-part framework continues to be a solid mechanism for assessing defensibility.

What "AI" Are We Talking About?

This post is about the use of artificial intelligence to facilitate the eDiscovery process, primarily in the quest to find relevant data during the Review and Analysis stages. It also is about potential challenges to that use, and hence to the AI automation and assessment tools used to accomplish those steps.

This topic often gets examined narrowly, looking only at Technology Assisted Review, a form of supervised machine learning used, per the EDRM website, to classify documents, "in an effort to expedite the organization and prioritization of the document collection." Questions about the defensibility of TAR were the focus of Judge Shaffer's article.
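To make "supervised machine learning" a bit more concrete, here is a minimal sketch of the classification loop at the core of a TAR workflow, written in Python with scikit-learn. The seed documents, labels, and scores below are invented for illustration; production TAR tools add iterative training rounds, validation sampling, and much more.

```python
# Minimal sketch of a TAR-style relevance classifier (hypothetical data).
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Reviewer-coded seed set: 1 = responsive, 0 = not responsive
seed_docs = [
    "baggage fee pricing discussed with competitor",
    "quarterly revenue summary for the finance team",
    "call with the other airline about matching the fee increase",
    "office holiday party logistics",
]
seed_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed collection and surface likely-responsive documents first
unreviewed = ["memo on coordinating fee schedules", "cafeteria menu update"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for score, doc in sorted(zip(scores, unreviewed), reverse=True):
    print(f"{score:.2f}  {doc}")
```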

While TAR is important and is used regularly by legal professionals to enhance review by aiding the decision-making process, other forms of AI technology also are deployed in eDiscovery in many ways, all potentially subject to challenge in court. Examples include:

AI models: Generally, an AI model is a software program or set of algorithms that has been trained on a dataset to perform specific tasks, such as recognizing certain patterns. AI models use decision-making algorithms to learn from their training data and apply that learning to achieve specific pre-defined objectives, including objectives tied to particular legal issues. AI models can be used individually; they also can be layered together, further extending their value.
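As a rough illustration of that layering idea, the sketch below chains three stand-in "models" so that each stage feeds, or gates, the next. The function names and logic are hypothetical placeholders, not any real product's API:

```python
# Hypothetical sketch of layering models: each stage's output feeds or
# gates the next. The three "models" are trivial stand-ins, not a real API.

def detect_language(text):
    # Stand-in for a language-identification model (checks for Hangul syllables)
    return "ko" if any("\uac00" <= ch <= "\ud7a3" for ch in text) else "en"

def score_responsiveness(text):
    # Stand-in for a trained relevance classifier
    return 0.9 if "fee" in text.lower() else 0.1

def flag_negative_sentiment(text):
    # Stand-in for a sentiment model, run only on likely-responsive documents
    return any(w in text.lower() for w in ("angry", "furious", "unacceptable"))

def run_pipeline(documents):
    records = []
    for text in documents:
        record = {"text": text, "language": detect_language(text)}  # layer 1
        record["responsive_score"] = score_responsiveness(text)     # layer 2
        if record["responsive_score"] > 0.5:                        # gate layer 3
            record["negative"] = flag_negative_sentiment(text)
        records.append(record)
    return records

for record in run_pipeline(["This fee increase is unacceptable.", "Lunch at noon?"]):
    print(record)
```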

Anomaly detection: AI can use natural language processing to analyze language for negative or positive sentiment (which can indicate biases), to flag communications sent at unusual times, or to surface messages containing various forms of emotional content, a topic covered in The Exquisite eDiscovery Magic of Data Anomaly Detection.
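As a toy illustration of two of those signals, the sketch below flags messages that carry strongly negative sentiment (using NLTK's off-the-shelf VADER analyzer) or that were sent at unusual hours. The threshold and the "unusual hours" window are illustrative assumptions, not settings from any eDiscovery product:

```python
# Flag messages by negative sentiment or odd send times.
# Requires: pip install nltk, then nltk.download("vader_lexicon")
from datetime import datetime
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

messages = [
    {"text": "Delete those files before anyone asks.",
     "sent": datetime(2021, 8, 3, 2, 14)},
    {"text": "Great work on the quarterly report!",
     "sent": datetime(2021, 8, 3, 10, 5)},
]

for msg in messages:
    compound = analyzer.polarity_scores(msg["text"])["compound"]  # -1 (negative) to +1 (positive)
    odd_hours = msg["sent"].hour < 6 or msg["sent"].hour >= 22    # assumed "unusual" window
    if compound < -0.3 or odd_hours:
        print(f"flag (sentiment={compound:+.2f}, odd_hours={odd_hours}): {msg['text']}")
```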

Foreign language content: Some AI systems can detect foreign language content. Others go an important step further and translate that content for you, sometimes allowing you to choose both the source language (e.g., translate from Korean) and the destination language (e.g., translate into Spanish).
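For the detection step, here is a minimal sketch using the open-source langdetect package (one of several ways to do this; translation itself would be handed off to a separate model or service):

```python
# Detect each document's language and route non-English content to translation.
# Requires: pip install langdetect
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect's results deterministic

documents = [
    "The merger closes next quarter.",
    "계약 조건을 검토해 주세요.",          # Korean
    "Revisa los términos del contrato.",  # Spanish
]

for text in documents:
    lang = detect(text)  # returns an ISO 639-1 code such as "en", "ko", "es"
    if lang != "en":
        print(f"route to translation ({lang} -> en): {text}")
```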

Image recognition and classification: Computer vision, an AI tool that enables computers to derive meaningful information from pictures, can be used to identify content in images and apply labels to those images. Those labels can be searched with all the same tools used to search other textual content. For more details, go to Image Recognition and Classification During Legal Review.
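The "labels become searchable text" idea can be shown with a toy sketch. Here classify_image is a hard-coded stand-in for a real computer vision model, so the example runs on its own:

```python
# Index image labels alongside other text so ordinary keyword search reaches images.

def classify_image(path: str) -> list[str]:
    # Stand-in for a vision model; real models return labels with confidence scores.
    fake_labels = {
        "IMG_0041.jpg": ["wall", "mold", "water damage"],
        "IMG_0042.jpg": ["conference room", "whiteboard"],
    }
    return fake_labels.get(path, [])

# Build a simple label index for the collection
index = {path: classify_image(path) for path in ("IMG_0041.jpg", "IMG_0042.jpg")}

# A keyword query now finds pictures, not just documents
query = "mold"
print([path for path, labels in index.items() if query in labels])  # ['IMG_0041.jpg']
```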

Do We Need Judicial Endorsement to Use AI in Discovery?

If questions posed at conferences and during webinars are any guide, there seems to be a widely held belief that AI (whether TAR or some other form) should only be used for eDiscovery if a judge has explicitly endorsed use of the technology in a court case.

Not so, as we can see from the case law, starting with one of the very first court decisions addressing the use of TAR. The Honorable Andrew J. Peck served for 23 years as a United States Magistrate Judge for the Southern District of New York and currently is senior counsel at DLA Piper. In 2012, he wrote a landmark decision in the employment class action matter Monique Da Silva Moore, et al. v. Publicis Groupe & MSL Group. That opinion was the first judicial decision to "recognize[] that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases".

Judge Peck's decision has been widely misinterpreted. First, the judge did not order the parties to use TAR but instead approved the parties' joint agreement that defendants be permitted to use it. He stressed this point in the first footnote in his February 24, 2012 Opinion and Order:

To correct the many blogs about this case, initiated by a press release from plaintiffs' vendor— the Court did not order the parties to use predictive coding. The parties had agreed to defendants' use of it, but had disputes over the scope and implementation, which the Court ruled on, thus accepting the use of computer-assisted review in this lawsuit.

Second, the touchstone is not judicial endorsement but rather reasonableness. As Judge Peck has noted many times since issuance of that opinion, including during a March 2021 ABA panel discussion: "If you are the unlucky or lucky person who is the first to take [new information technology] before the court, just make sure to explain what you’re doing, what the technology is doing, and why the result … is reasonable."

Back to Defensibility

By now it should be evident that there is no simple answer to the question, "Is it defensible to use AI in [fill in the blank] case?" What, then, do we do?

Frustrating though it may be, each use needs to be considered in its own right. For that, we can return to Judge Shaffer's four-part framework of functionality, reasonableness, reliability, and understandability:

  • Functionality: The AI process you are considering using should be designed to meet defined needs. If you are trying to find additional documents similar to exemplars selected by you or opposing counsel, the technology you use should be capable of achieving that objective. If your goal is to find pictures showing mold on walls, you should use a tool that can accomplish that purpose.
  • Reasonableness: There should be a rational relationship between the costs of deploying the AI process you intend to use and the value you hope to obtain from that use. An AI process that chokes on 1,000 documents should not be turned to when you have to deal with a population of a million files. At the same time, to turn a popular phrase on its head, you should not have to bring a gun to a knife fight.
  • Reliability: You need to be prepared to demonstrate that the AI process you use works reasonably well and does so with reasonable consistency. (Yes, I know, "reasonable" is always and ever a squishy term, but that is the world we lawyers so often have to work in.) You do not need to prove perfection, and reasonableness has no bright line. Nonetheless, most of the time lawyers and judges - who generally are lay people when it comes to the fine points of AI - can do a good job of assessing how well a process delivers results. And if they encounter questions of precision, recall, or other means of measuring reliability, they can evaluate the results through sampling, testing, and various other strategies. (In general terms, precision looks at how many of the documents identified as responsive actually were responsive, while recall looks at how many of the responsive documents actually were identified; see the worked example after this list.)
  • Understandability: In the end, for parties and the judiciary to accept the use of AI as defensible, they are going to have to be able to understand that use well enough to pass judgment on it. Once again, there are few, if any, simple answers or bright lines. Rather, this is the time for attorneys and their staff, service providers, consultants, and expert witnesses to show their value by explaining processes, their use, and the results that come of their use in clear terms that those unschooled in eDiscovery and AI can understand.
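To make the parenthetical on precision and recall concrete, here is a worked example computed from a hypothetical validation sample; the counts are made up for illustration:

```python
# Hypothetical validation sample of reviewed documents
true_positives = 270   # flagged responsive and actually responsive
false_positives = 30   # flagged responsive but not actually responsive
false_negatives = 90   # responsive documents the process missed

precision = true_positives / (true_positives + false_positives)  # 270 / 300
recall = true_positives / (true_positives + false_negatives)     # 270 / 360

print(f"precision = {precision:.0%}")  # 90% of flagged documents were responsive
print(f"recall    = {recall:.0%}")     # 75% of responsive documents were found
```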

Whether the AI at issue is TAR deployed to "find more like this", software used to translate content from one language to another, or anomaly detection deployed to locate communications containing strong negative sentiments, the same defensibility framework can be applied.

If your organization is interested in learning more about how Reveal uses AI as an integral part of its end-to-end document review platform, contact us.
