11. Technology and the Law

Technology and Bias

In recent years, the rise of artificial intelligence (AI) has extended into the legal field. By automating decision-making, these technologies aim to expedite less complex legal matters and lower their cost – both for matters that make it to court and those that don’t. However, it is important to acknowledge the limitations of these technologies. One such limitation exists within a subset of AI called machine learning. Briefly, machine learning works by feeding data into a system that detects patterns within the data and uses those patterns to provide users with a desired output.

For example, if a decision-maker seeks to determine whether someone should be fined for trespassing, a machine learning system could analyze the facts of the case – including demographic characteristics, and when, why, and how the trespass occurred – and then determine whether a fine should be applied by comparing those facts with analogous facts from prior settled cases. If the circumstances are sufficiently analogous to the majority of cases where a fine was issued, a fine will be applied; if they are not, no fine will be issued. Of course, this is a simplified explanation of the machine learning process, but in essence, these are the kinds of patterns the AI system is looking for.
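To make this pattern-matching concrete, the short Python sketch below mimics the trespassing example with a simple nearest-neighbour rule. Everything in it – the case features, the settled cases, and the majority-vote threshold – is invented for illustration; real systems are far more elaborate.

    # A minimal sketch of the case-matching described above: a new trespass
    # case is compared against previously settled cases, and a fine is issued
    # only if most of the closest prior cases resulted in a fine. All features
    # and values here are hypothetical.
    from math import dist

    # Hypothetical settled cases: (features, fine_issued).
    # Features: (hour of day, prior incidents, property damage in dollars)
    settled_cases = [
        ((23, 2, 500), True),
        ((22, 3, 800), True),
        ((14, 0, 0), False),
        ((10, 0, 50), False),
        ((21, 1, 300), True),
    ]

    def should_fine(new_case, k=3):
        """Fine if the majority of the k most similar settled cases were fined."""
        # Rank settled cases by how close their facts are to the new case.
        ranked = sorted(settled_cases, key=lambda case: dist(case[0], new_case))
        fined_count = sum(1 for _, fined in ranked[:k] if fined)
        return fined_count > k // 2

    # A late-night trespass with one prior incident and moderate damage
    # resembles the fined cases, so the rule issues a fine.
    print(should_fine((22, 1, 400)))  # True

Note that the rule simply echoes whatever regularities sit in the settled cases; it has no notion of whether those past outcomes were just.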

An important concern is that the data a machine learning system uses to make its decisions may reinforce and perpetuate a legal system that is discriminatory and biased. Dr. Gideon Christian, a Canadian researcher exploring strategies to improve the use of AI in justice and legal settings, explains some of the risks associated with indiscriminate faith in big data: “AI technologies are trained and rely on big data to make predictions. Some of this data is historical data from eras of mass incarceration, biased policing, and biased bail and sentencing regimes characterized by systemic discrimination against sections of society. Police practices such as stop-and-frisk or carding and street checks have been routinely criticized for disproportionately targeting young Black and Indigenous people, which have resulted in racially biased data that can influence AI tools trained with that data.” (https://ucalgary.ca/news/researcher-explores-ways-improve-ai-based-decision-making-justice-system)
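Dr. Christian’s point can be made concrete with a toy example. In the Python sketch below, the historical records are entirely fabricated so that one group was fined far more often than another for identical conduct; a model that predicts from the historical fine rate then reproduces that disparity for new, identical cases.

    # A toy illustration of biased data producing biased predictions, using
    # entirely fabricated records: (group, severity of conduct, fined).
    # Both groups exhibit the same conduct, but group "B" was fined more often.
    history = [
        ("A", 1, False), ("A", 1, False), ("A", 2, False), ("A", 2, True),
        ("B", 1, True), ("B", 1, True), ("B", 2, True), ("B", 2, True),
    ]

    def predicted_fine_rate(group, severity):
        """Predict using the historical fine rate among matching past cases,
        which is effectively what a pattern-based model learns."""
        matches = [fined for g, s, fined in history if g == group and s == severity]
        return sum(matches) / len(matches)

    # Identical conduct, different groups – the predictions differ anyway.
    print(predicted_fine_rate("A", 1))  # 0.0
    print(predicted_fine_rate("B", 1))  # 1.0

Nothing about the conduct differs between the two calls; the disparity comes entirely from the historical records the model was given.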

This is particularly problematic when we consider the importance of integrating Indigenous methods of justice, which have rarely been used in the Canadian legal system. A machine learning output that does not account for Indigenous perspectives is not only regressive but may also perpetuate harmful, unfair, and unjust attitudes and outcomes.

Though there are tremendous benefits to automating parts of the Canadian legal system, we must be careful not to ignore the repercussions of AI. If the legal system is to adopt AI into its practices, it must find a way to uphold the values of legal progress and adaptability while also working to prevent racism and bias against the most vulnerable members of the population.

License

Business Law and Ethics Canadian Edition Copyright © 2023 by Craig Ervine is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
