There has been much discussion of late about whether and to what extent it is ethical to use artificial intelligence (AI) for eDiscovery.
As I have read posts and articles on the topic and listened to what has been said during conferences and webinars, I have had a hard time pinning down the concerns and their implications.
In this series of posts, I will try to make sense of the issues and figure out where we as an industry stand – and should stand – on these issues.
In this post, the first in the series, I will look at definitions of key terms we need to understand if we are to make sense of the ethical issues related to the use of AI in eDiscovery. In the next post, I will examine ethical frameworks that may provide guidance.
According to Britannica, ethics, “also called moral philosophy, [is] the discipline concerned with what is morally good and bad and morally right and wrong. The term is also applied to any system or theory of moral values or principles.”
Merriam-Webster draws a distinction: “Morals often describes one's particular values concerning what is right and what is wrong.… While ethics can refer broadly to moral principles, one often sees it applied to questions of correct behavior within a relatively narrow area of activity…”
Legal ethics is more focused, as noted by Cornell Law School’s Legal Information Institute. Also known as professional responsibility, it is the law governing lawyers:
Because of their role in society and their close involvement in the administration of law, lawyers are subject to special standards, regulation, and liability. Sometimes called legal ethics, sometimes professional responsibility, the topic is perhaps most comprehensively described as the law governing lawyers.
For lawyers in the US, the ABA Model Rules of Professional Conduct “serve as models for the ethics rules of most jurisdictions”.
Discovery, as defined by the ABA Division for Public Education, is “the formal process of exchanging information between the parties about the witnesses and evidence they’ll present at trial.” In US Federal courts, forms of discovery available to parties include:
- Required initial disclosures;
- Depositions;
- Interrogatories;
- Requests for production of documents and electronically stored information;
- Requests for admission; and
- Physical and mental examinations.
eDiscovery, a subset of discovery, focuses on electronically stored information (ESI), as opposed to information stored on paper, in people’s heads, or in other tangible forms. eDiscovery is the process of finding, using, and managing that information, narrowly in the context of litigation and more broadly for any form of investigation or dispute resolution. The general eDiscovery process is described in the EDRM model, a conceptual depiction of the key steps in that process and the relationships between those steps. Federal rules that implicate eDiscovery include:
- Federal Rules of Civil Procedure 16, 26, 34, 37, and 45, which address, among other things, discovery planning, the scope of discovery, requests for ESI, sanctions for failure to preserve ESI, and subpoenas; and
- Federal Rule of Evidence 502, which governs waiver of attorney-client privilege and work-product protection.
(For a handy app with links to these rules and more, download the Reed Smith E-Discovery App, available via Apple’s App Store and Google Play.)
Definitions of artificial intelligence abound.
John McCarthy, a Dartmouth and Stanford computer scientist widely credited with coining the term “artificial intelligence” in 1955, defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.”
Merriam-Webster defines artificial intelligence as a “branch of computer science dealing with the simulation of intelligent behavior in computers” and “the capability of a machine to imitate intelligent human behavior.”
The ABA House of Delegates, in Resolution 112, adopted in 2019, offered this definition (citations removed):
Artificial intelligence has been defined as “the capability of a machine to imitate intelligent human behavior.” Others have defined it as “cognitive computing” or “machine learning.” Although there are many descriptive terms used, AI at its core encompasses tools that are trained rather than programmed. It involves teaching computers how to perform tasks that typically require human intelligence such as perception, pattern recognition, and decision-making.
AI can be divided into categories. One common view divides AI into four subsets: reactive, limited memory, theory of mind, and self-aware. Reactive AI “is programmed to provide a predictable output based on the input it receives.” Limited memory AI “learns from the past and builds experiential knowledge by observing actions or data.” Theory of mind AI, still a theory rather than a product, will be available when “machines will acquire true decision-making capabilities that are similar to humans.” Self-aware AI also has not yet been achieved: “When machines can be aware of their own emotions, as well as the emotions of others around them, they will have a level of consciousness and intelligence similar to human beings.”
A more pragmatic approach organizes AI into four functional groups: machine learning, natural language processing, computer vision, and robotics. I discussed this in an earlier post, Legal AI Software: Taking Document Review to the Next Level.
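To make the “trained rather than programmed” distinction concrete, here is a minimal sketch that combines machine learning with basic natural language processing: it trains a small text classifier on hand-labeled example documents and then scores new ones. This is a generic illustration using the open-source scikit-learn library with invented sample data, not any vendor’s actual implementation.

```python
# A minimal, illustrative sketch of "trained rather than programmed":
# instead of writing rules, we give the model labeled examples and let it
# learn patterns that distinguish responsive from non-responsive documents.
# Requires scikit-learn (pip install scikit-learn). Sample data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training examples (in practice, reviewers' coding decisions).
train_docs = [
    "Please send the updated pricing schedule for the Acme contract.",
    "Draft amendment to the Acme master services agreement attached.",
    "Lunch on Friday? The new place on 5th has great tacos.",
    "Reminder: the office closes early before the holiday weekend.",
]
train_labels = ["responsive", "responsive", "not_responsive", "not_responsive"]

# TF-IDF turns text into numeric features; logistic regression learns
# which features predict each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# Score unreviewed documents; the probabilities can drive prioritized review.
new_docs = [
    "Can you forward the signed Acme agreement and pricing exhibit?",
    "Happy birthday! Cake in the break room at 3pm.",
]
for doc, probs in zip(new_docs, model.predict_proba(new_docs)):
    score = dict(zip(model.classes_, probs))["responsive"]
    print(f"{score:.2f}  {doc}")
```

The point of the sketch is the workflow, not the model: the computer is never told what makes a document responsive; it infers that from examples, which is the core of what Resolution 112 describes.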
Perhaps the easiest way of understanding what AI means in the context of eDiscovery is to consider ways in which AI has been deployed in platforms such as Reveal’s. Examples include:
- Technology-assisted review (predictive coding), which learns from reviewers’ coding decisions to score and prioritize unreviewed documents;
- Concept clustering and concept searching, which group and retrieve documents by topic rather than by keyword alone;
- Entity extraction, which identifies people, organizations, and other named entities across a dataset;
- Sentiment and emotional intelligence analysis, which flags communications whose tone may warrant closer review; and
- Image recognition and labeling, which classifies pictures and other non-text content.
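As one illustration of how a feature like concept clustering can work under the hood, the following sketch groups a handful of documents by shared vocabulary using TF-IDF features and k-means. It is again a generic, hand-rolled example built on scikit-learn with invented data; commercial platforms use far richer models.

```python
# Illustrative concept clustering: group documents by shared vocabulary
# using TF-IDF features and k-means. Generic sketch with invented data,
# not any platform's actual clustering engine. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Quarterly revenue forecast and budget spreadsheet attached.",
    "Updated budget numbers for the Q3 revenue forecast.",
    "Team offsite logistics: hotel block and dinner reservations.",
    "Offsite agenda and hotel confirmation numbers for the team.",
]

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(docs)

# Ask for two concept groups; n_init=10 reruns k-means for stability.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(features)

# Show each cluster's documents and its most characteristic terms.
terms = vectorizer.get_feature_names_out()
for cluster in range(2):
    top_terms = kmeans.cluster_centers_[cluster].argsort()[::-1][:3]
    print(f"Cluster {cluster}: {[terms[i] for i in top_terms]}")
    for doc_id, label in enumerate(cluster_ids):
        if label == cluster:
            print("   ", docs[doc_id])
```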
In the next post, I will examine ethical frameworks and related materials that may provide guidance, including the ABA Resolution 112 mentioned above, the ABA Model Rules of Professional Conduct, and the White House’s “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”.