Oliver Wendell Holmes, Jr. once described the law as nothing more than “prophecies of what the courts will do in fact.” If the practice of law is largely an exercise in fortune-telling, Benjamin Alarie believes that computers are very good at reading tea leaves.
Alarie is the Osler Chair in Business Law at the University of Toronto and the CEO of Blue J Legal, a company that develops software using machine learning to analyze and apply legal precedents. He believes that properly trained algorithms will make more timely, cost-effective, and accurate decisions than human regulators, lawyers, and judges.
Alarie’s recent presentation at Osgoode Hall Law School was based on his paper, Regulation by Machine. In both the paper and the presentation, he used the example of classifying workers as either employees or independent contractors to argue not only for the possibility of computers applying existing legal precedents but also for allowing them to evolve the law. He asserts that computer algorithms could eliminate biases and resolve problems with the existing specification of the law.
Machine learning is a branch of computer science in which software learns from data rather than being explicitly programmed to produce a particular outcome. Instead of having each step in the decision-making process pre-determined by a programmer, the software analyzes existing data about similar decisions and uses statistical analysis to find patterns in that data. It then applies those patterns to new, unseen data in order to make predictions about it.
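For readers curious what that looks like in practice, the following is a minimal, purely illustrative sketch in Python using the scikit-learn library. The factors, data, and model here are invented for the example; they are not Blue J Legal's actual system or data.

```python
# A minimal sketch of supervised machine learning with scikit-learn.
# The features and labels below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row encodes hypothetical facts about a working relationship:
# [owns_tools, sets_own_hours, can_subcontract, paid_fixed_salary]
X_train = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
]
# Labels taken from past decisions: 1 = independent contractor, 0 = employee
y_train = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)           # learn patterns from the labelled examples

# Apply the learned patterns to facts the model has not seen before
new_case = [[1, 0, 0, 0]]
print(model.predict(new_case))        # predicted class
print(model.predict_proba(new_case))  # estimated confidence in each class
```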
Alarie’s presentation addressed the classification of workers as either employees or contractors for income tax purposes. This is a common problem in both employment and tax law, for which the Canada Revenue Agency (CRA) has published extensive guidelines. Classification involves weighing a variety of factors and does not always have a single clear answer.
Alarie and his colleagues gathered hundreds of Tax Court of Canada decisions involving a worker’s classification and divided the data into two subsets. The larger subset was used to train their software, which determined what weight judges gave to each of the factors. The smaller subset was then used to test the software by asking it to predict the result based on the facts of cases it had not yet seen. For each case, the software provided a prediction – either employee or contractor – and a percentage level of confidence in that prediction.
They then compared those predictions to the actual findings of the Tax Court in the tested cases. They reported that the software correctly predicted the Tax Court’s decision in over 90% of cases, and that the erroneous predictions came with low confidence ratings, suggesting the algorithm can flag the cases where its prediction is least reliable.
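As a rough illustration of that train-and-test methodology, here is a hedged sketch using scikit-learn with synthetic data standing in for the digested Tax Court decisions. The feature count, model, and resulting accuracy are placeholders, not the study’s actual figures.

```python
# A sketch of the train/test methodology: split the data, train on the
# larger subset, then measure accuracy and confidence on the held-back cases.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in for several hundred digested decisions: each row is a vector of
# factors, each label is the court's finding (employee vs. contractor).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Larger subset for training, smaller subset held back for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

predictions = model.predict(X_test)                    # employee or contractor
confidence = model.predict_proba(X_test).max(axis=1)   # confidence per prediction

print("accuracy on unseen cases:", accuracy_score(y_test, predictions))
print("lowest confidence among predictions:", confidence.min())
```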
Even more interesting than the prediction software itself were Alarie’s own predictions about how this kind of technology will change the law.
The first step, which is already in progress, involves legal professionals using software to support their own research and the professional advice they offer clients. As algorithms become more reliable and engender greater trust from regulators and corporate users, their predictions will become the de facto standard for making a particular type of legal decision.
Furthermore, the predictions will be more consistent and cheaper to obtain than a trial decision. Predictions will also be available ex ante, so users will be able to adjust their real-world behaviour to align with the legal result they are seeking, rather than acting now and hoping that any future judgment will go their way.
Only in those fringe cases, where the algorithm cannot make a confident prediction, would an actual trial be necessary. The trial decisions in those cases can then be fed back into the algorithm to improve its accuracy in subsequent analyses.
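A self-contained, hypothetical sketch of that feedback loop might look something like this: once a fringe case is actually tried, the new decision is appended to the training data and the model is refit. The data and labels here are synthetic placeholders.

```python
# Hypothetical feedback loop: fold a newly decided case back into the
# training data and retrain, sharpening future predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))          # existing digested case law (toy data)
y_train = (X_train[:, 0] > 0).astype(int)    # toy labels: employee vs. contractor

model = LogisticRegression().fit(X_train, y_train)

# A newly decided fringe case arrives from the Tax Court.
x_new = rng.normal(size=(1, 8))
y_new = np.array([1])                        # the court's finding

# Append the new decision to the training data and retrain the model.
X_train = np.vstack([X_train, x_new])
y_train = np.concatenate([y_train, y_new])
model = LogisticRegression().fit(X_train, y_train)
```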
Somewhere in the future, Alarie imagines a legal singularity: a time when statutes could be written to reference the results of particular machine learning algorithms as the de jure law of the land. The results might continue to evolve as a result of judicial review or legislative tweaking, but receiving a decision from an algorithm would have the same legal weight as receiving a decision from a trial judge.
A very active question period followed the presentation. Audience members questioned whether people would accept a decision-making process that has a known error rate. They also raised concerns about whether the algorithms would entrench systemic biases and legal errors that otherwise might be corrected in the case law.
Alarie envisions a technological future where computing power allows us to identify and overcome these problems. He also reminded the audience that we accept a significant level of error in a human-controlled legal system, and that we should not expect perfection from a computer-controlled system. It is enough that computer algorithms could substantially improve upon the performance of human decision makers.
All of which leaves only one outstanding question:
When computers take over the legal world, will any of us still have jobs?
Jacquilynne Schlesier is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.