The rise of AI in the legal industry: will justice prevail?

An overburdened court system has led to renewed interest in the use of AI in the legal industry. In Estonia, robot judges are planned for introduction in late 2019. But how accurate and impartial can AI really be in making legal decisions?

Government might not be the most obvious place to look for advances in the use of artificial intelligence (AI), but in Estonia, where about 22 percent of the country’s 1.3 million citizens are civil servants, efficiency is critical.

To that end, Estonia’s Ministry of Justice is creating a “robot judge” capable of handling small claims disputes involving less than $8,000. According to a recent Wired report, the robo-judges, planned for introduction later in 2019, will review documents from both sides of a dispute and render decisions that can be appealed to a human judge.

Digital government is nothing new to Estonia, where residents carry a national ID card that allows them to vote and file their taxes digitally, and X-Road (a digital infrastructure for data sharing) lets them log into the government’s portal to check on who has been accessing their information. The Estonian government says the country hasn’t faced a major data breach in nearly two decades, and officials now hope that an algorithm will help clear a growing backlog of cases faced by judges and court clerks.

Overburdened court systems have driven renewed interest in AI throughout the legal industry. The aim? To let human judges and lawyers focus on more complicated cases while AI takes on more basic matters, such as analyzing documents and data during the discovery process. Increasingly, though, algorithms are also being used to influence the outcome of actual cases.

AI and the application of the law: what’s the push?

The case for using AI in the legal industry hinges on one thing: the perception that machines can be more impartial than human beings, analyzing facts and informing judgments more objectively.

According to the Electronic Privacy Information Center (EPIC), criminal justice algorithms are in use across the U.S., drawing on personal data like age, sex, and employment history to recommend sentences, set bail, and even help determine verdicts.

The argument for using such systems is that they are unaffected by human frailties like emotion, bias, error, irrationality, and fatigue. Even so, efforts to introduce AI into the U.S. legal system have drawn criticism.

In the U.S., AI has mainly been used as a tool for human judges, although Chief Justice of the U.S. Supreme Court John Roberts has said that AI is already having a significant impact on the nation’s legal system. “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things,” he said, according to a New York Times report.

How accurate is AI at predicting the outcome of real cases?

Several years ago, a team of computer scientists at University College London developed a system that accurately predicted the outcomes of real human rights cases. The scientists trained a machine-learning algorithm on a set of court decisions involving torture, degrading treatment, and privacy, weighting the importance of particular words in each case.

Once trained, the system was applied to cases it had not seen and reached the same conclusion as the human judges about 79 percent of the time. In 2017, researchers analyzed 199 years’ worth of U.S. Supreme Court decisions (28,009 cases) and found that AI predicted case outcomes with more than 70 percent accuracy.
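To make the approach concrete, here is a minimal sketch of a case-outcome classifier in the same spirit. Everything in it is an assumption for illustration: the toy documents, the binary violation/no-violation labels, and the TF-IDF-plus-logistic-regression pipeline stand in for the studies’ actual data and models.

```python
# A minimal sketch of outcome prediction as text classification.
# The documents, labels, and model below are illustrative assumptions,
# not the UCL study's actual data or pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy corpus: case text paired with a binary outcome
# (1 = violation found, 0 = no violation).
texts = [
    "applicant alleges degrading treatment during pre-trial detention",
    "complaint concerns interception of private correspondence",
    "claimant reports prolonged solitary confinement and ill treatment",
    "dispute over publication of personal medical records",
    "detainee describes denial of medical care amounting to torture",
    "applicant objects to surveillance of telephone communications",
    "prisoner alleges physical abuse by detention officers",
    "challenge to retention of biometric data by the state",
]
outcomes = [1, 0, 1, 0, 1, 0, 1, 0]

# Weight individual words and two-word phrases by TF-IDF, so terms
# that distinguish cases count more than ubiquitous ones.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)

# Hold out a quarter of the cases so accuracy reflects unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, outcomes, test_size=0.25, random_state=0, stratify=outcomes)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On a real corpus, the reported accuracy figures would come from evaluation over thousands of decisions, not a handful of toy sentences as here.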

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is widely used to help determine a defendant’s risk of reoffending. But according to a study by two Dartmouth College researchers, COMPAS was no more accurate at forecasting an individual’s risk of recidivism than random volunteers with little or no criminal justice experience who were recruited from the Internet.

COMPAS is also at the center of an ongoing debate over racial bias. In 2016, a ProPublica research team analyzed the algorithm’s assessments of more than 7,000 people arrested in Broward County, Florida. The researchers found that “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” while whites were much more likely to be considered low risk but went on to commit other crimes. But some questioned the analysis, arguing that COMPAS accurately predicted recidivism in both white and black offenders at comparable rates.
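Much of the disagreement comes down to which error metric is measured. A risk model can flag reoffenders with similar precision for both groups (the response to ProPublica) while wrongly flagging far more non-reoffenders in one group (ProPublica’s finding). The sketch below uses made-up confusion-matrix counts, chosen purely to illustrate the arithmetic, to show both things happening at once.

```python
# Illustrates the two competing fairness metrics in the COMPAS debate,
# using invented counts for two hypothetical groups of 100 people each.
import numpy as np

def group_metrics(y_true, y_pred):
    """False positive rate and precision for one group's predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        # ProPublica's measure: non-reoffenders wrongly flagged high risk.
        "false_positive_rate": fp / (fp + tn),
        # The counter-argument's measure: of those flagged, how many reoffended.
        "precision": tp / (tp + fp),
    }

# Counts are ordered TP, FN, FP, TN. Group A has a higher base rate of
# reoffending (50%) than group B (20%); both are scored by one model.
group_a_true = np.repeat([1, 1, 0, 0], [30, 20, 20, 30])
group_a_pred = np.repeat([1, 0, 1, 0], [30, 20, 20, 30])
group_b_true = np.repeat([1, 1, 0, 0], [12, 8, 8, 72])
group_b_pred = np.repeat([1, 0, 1, 0], [12, 8, 8, 72])

print("group A:", group_metrics(group_a_true, group_a_pred))
print("group B:", group_metrics(group_b_true, group_b_pred))
# Precision is 0.60 for both groups, yet group A's false positive rate
# (0.40) is four times group B's (0.10): each side is measuring
# something different, and both can be arithmetically correct.
```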

Impartial technology created by biased humans?

Although the benefits of using AI in the legal industry could be significant, a close look at the use of algorithms to make legal decisions suggests the technology may not be as impartial or consistent as it seems. In the end, these programs are still designed and trained by humans, who carry all the implicit biases society has instilled in them.

What do you think of using AI to make legal decisions? Tell us about it in the comments!
