AI & Legal Tech

Zoe Chan, Anika Legal

Anika Legal is an Australian not-for-profit organization that fights alongside renters for housing justice. We fight for a world where renters can thrive in their homes.

Since launching five years ago, Anika has experienced substantial growth that is people-focused and technology-enabled. As a result, despite employing only two lawyers (one of whom joined Anika as recently as 2023), we have provided over 800 renters with ongoing casework support, followed a data-driven approach to continually advocate for systemic change amid a housing crisis, and worked with over 200 law students to fight for a fairer world for renters.

We will share the impact of Anika Legal’s technology-driven model, highlight our innovative approaches, and present key lessons learned in enhancing access to justice.

Impact of Technology

Anika leverages technology to address the housing crisis and bridge the justice gap. Our bespoke digital case management platform enables us to celebrate and center remote and flexible work, creating a more robust support system for renters by drawing on untapped legal resources. This flexibility has allowed us to assist over 800 renters, with services ranging from bond recovery to eviction support. This tech-enabled approach not only enhances our service delivery but also strengthens our advocacy for systemic reforms in the housing sector.

Our advocacy efforts are fueled by the evidence gathered through our casework. The data collected through our platform helps us identify trends and systemic issues, enabling us to advocate effectively for policy changes. To this end, our platform optimizes the collection of data without placing additional administrative burden on either our clients or our workers, and this data feeds directly into our advocacy initiatives.
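As a rough illustration of how structured casework data can become advocacy evidence, consider aggregating case records by issue type and location. The field names and records below are hypothetical placeholders, not our actual schema:

```python
from collections import Counter

# Hypothetical case records as a case management platform might export
# them; the field names ("issue", "postcode", "outcome") are
# illustrative, not Anika's actual schema.
cases = [
    {"issue": "repairs", "postcode": "3000", "outcome": "resolved"},
    {"issue": "repairs", "postcode": "3000", "outcome": "escalated"},
    {"issue": "bond recovery", "postcode": "3056", "outcome": "resolved"},
    {"issue": "eviction", "postcode": "3000", "outcome": "escalated"},
    {"issue": "repairs", "postcode": "3056", "outcome": "resolved"},
]

# Count how often each issue type appears, and where repair problems
# cluster geographically -- the kind of aggregate that can support a
# policy submission without extra data entry by clients or caseworkers.
issue_counts = Counter(case["issue"] for case in cases)
repairs_by_postcode = Counter(
    case["postcode"] for case in cases if case["issue"] == "repairs"
)

print(issue_counts.most_common())        # e.g. [('repairs', 3), ...]
print(repairs_by_postcode.most_common()) # e.g. [('3000', 2), ('3056', 1)]
```

The point of the sketch is that when intake and casework already capture structured fields, trend evidence falls out of the data for free.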

Our presentation will discuss how our bespoke platform operates and facilitates the efficient delivery of legal services. We will share our journey in developing our digital infrastructure with very limited funding, show how tech-enabled solutions can be harnessed even in a low-resource environment, and discuss the lessons we’ve learned along the way, such as:

  • Lo-fi tech solutions can often be used to quickly iterate practice processes, without the need to seek additional tech upgrade funding.
  • Codesigning practice improvements with core user groups enables quick iteration and optimizes change management.
  • Practice design centered on the right users can unlock untapped human resources.

Daniel Escott, Osgoode Hall Law School
Michael Litchfield, University of Victoria

In this presentation, we will explore the role of user-centric policy design in managing the risks associated with the deployment of artificial intelligence (AI) in the legal sector. AI is transforming legal practice and has the potential to increase access to justice by improving efficiency in areas such as case management, research, and decision-making. However, its integration also raises significant risks. To address these risks, we propose a user-centric approach to policy development that places the needs and experiences of key stakeholders (litigants, legal professionals, and court administrators) at the center of AI governance.

Drawing on Canada’s proposed Artificial Intelligence and Data Act (AIDA), existing laws, and developing international norms, the presentation will emphasize the importance of transparency, fairness, and accountability in AI systems. We will also examine the Federal Court’s interim principles on AI usage as a leading example of how user-centric policies can be implemented in practice. The presentation will argue that engaging stakeholders through ongoing feedback and implementing a strong risk management framework are vital for creating policies that both mitigate AI risks and maintain the integrity of the legal system. By focusing on user needs, legal institutions can ensure that AI enhances, rather than undermines, access to justice and the fairness of legal processes.

Nye Thomas, Law Commission of Ontario

Summary

AI systems offer significant potential benefits to governments, the private sector, and the public. Many believe that these tools can “crack the code of mass adjudication”, improve access to justice, improve public and private services, and reduce backlogs.

At the same time, public and private sector use of AI is controversial. There are many examples of AI systems that have proven to be biased, illegal, secretive, or ineffective.

In response to these risks, governments around the world are adopting “Trustworthy AI” frameworks to assure the public that AI development and use will be transparent, legal, and beneficial.

“Trustworthy AI” legislation and policies are advancing quickly, but inconsistently and incompletely.

This presentation will consider whether current approaches to AI regulation are materially advancing access to justice and human rights for low-income and vulnerable communities. The presentation will highlight themes or issues that access to justice advocates should consider when evaluating AI regulatory proposals in their respective jurisdictions.

Background

Achieving access to justice and “Trustworthy AI” depends on a complex series of policy, legal and operational questions that go far beyond public statements of principle.

Governments have adopted very different approaches to AI regulation. Canada was a pioneer in AI regulation and adopted one of the world’s first algorithmic impact assessments for government AI systems.

The EU’s recently passed Artificial Intelligence Act and Canada’s proposed federal Artificial Intelligence and Data Act (AIDA) are examples of national or “horizontal” AI regulation. In contrast, other jurisdictions have enacted targeted or sectoral legislation or policies to govern AI in specific locations or contexts. For example, many US federal, state, and local laws and policies target specific AI applications or technologies, such as New York City’s legislation governing employment AI systems and the bans or restrictions on police facial recognition systems enacted in more than 20 US jurisdictions.

The different approaches to AI regulation can have significant implications for access to justice and human rights for low-income and vulnerable communities. For example, there is a wide divergence in whether, or how, enforcement and remedies are addressed in AI governance frameworks.

The presentation will consider AI regulation from the perspective of important access to justice issues and principles, including:

  • What is AI’s potential to advance or hinder access to justice?
  • What are examples of AI systems affecting access to justice?
  • What are the emerging themes and gaps of AI regulatory models?
  • Are AI regulations advancing access to justice and human rights for low-income and vulnerable communities?
  • What are the key regulatory issues and choices?

The panel will be moderated by Nye Thomas, LCO Executive Director. Panelists will include LCO Policy Counsel Susie Lindsay (author of the LCO’s 2022 Accountable AI report) and two external experts.

David Wiseman, University of Ottawa
Julie Mathews, Community Legal Education Ontario

This presentation will explain the recommendations arising from a research report examining traditional and alternative regulatory approaches to the development of smart legal forms for priority justice-seekers. 

We use the term “smart legal forms” to refer to dynamic digital tools that enable people to complete legal documents online, with or without assistance. These customized software tools enable the general public to generate legal (or law-related) forms and documents for legal actions and transactions, including for legal dispute resolution processes. We focus on the goal of increasing the availability of smart legal forms in areas of civil justice for people who frequently experience law-related problems that affect their basic human needs. We refer to these people as “priority justice-seekers” and regard improvements in the extent to which they can access justice as advancements in “community justice.”
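To make the concept concrete, here is a minimal sketch of a smart legal form: a short question flow whose answers drive both conditional logic and document assembly. The scenario (a rental repair request) and all names and wording are hypothetical illustrations, not drawn from any existing tool:

```python
from string import Template

# A minimal "smart legal form": answers to a question flow feed both
# conditional logic and a document template. All wording is hypothetical.
QUESTIONS = [
    ("tenant_name", "Your full name: "),
    ("landlord_name", "Your landlord's name: "),
    ("problem", "Briefly describe the repair problem: "),
    ("urgent", "Is the problem urgent (yes/no)? "),
]

LETTER = Template(
    "Dear $landlord_name,\n\n"
    "I am writing to request repairs at my rental property: $problem.\n"
    "$urgency_clause\n\n"
    "Regards,\n$tenant_name\n"
)

def build_letter(answers: dict) -> str:
    # Conditional logic: urgent problems trigger stronger wording.
    if answers["urgent"].strip().lower().startswith("y"):
        clause = "As this is an urgent repair, please respond within 24 hours."
    else:
        clause = "Please let me know when the repairs can be carried out."
    return LETTER.substitute(answers, urgency_clause=clause)

if __name__ == "__main__":
    answers = {key: input(prompt) for key, prompt in QUESTIONS}
    print(build_letter(answers))
```

A production tool would add validation, plain-language guidance, and jurisdiction-specific content, but the core pattern of answers driving conditional document assembly is the same.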

After surveying research on the current landscape of technological tools offered for addressing law-related needs, we identify three key elements of smart legal forms that are effective for priority justice-seekers. Such forms are accessible and actionable; embody ethical principles and appropriate protections; and are responsive and relevant to the needs and context of priority justice-seekers.

Providers of smart legal forms are vulnerable to being regarded by legal regulators as providers of legal services and, in turn, are vulnerable to regulatory intervention on the basis of engaging in the unauthorized practice of law. This potential for regulatory intervention is often justified by the need to protect the public from harm. Yet legal services regulators generally also recognize a need to enable the development of technological tools to improve access to justice. This has led to the introduction of a spectrum of regulatory and non-regulatory approaches aimed at fostering this type of activity. An increasingly popular approach has been the creation of regulatory sandboxes. Examining the outcomes of the ongoing operation of regulatory sandboxes in Canadian and comparative jurisdictions, we conclude that this regulatory approach is failing to foster the development of smart legal forms for priority justice-seekers.

On the basis of our research into traditional and alternative regulatory approaches in this area, we ultimately propose an approach that combines action on regulatory and non-regulatory fronts. On the regulatory front, we suggest that the treatment of smart legal forms be adapted and targeted to the particular situation (including the need to support access and the relative level of risk); on the non-regulatory front, we propose proactive support to increase the availability of effective smart legal forms for priority justice-seekers.

Zach Zarnow, National Center for State Courts
Andy Wirkus, National Center for State Courts

Consumer debt cases make up a disproportionate percentage of filings in civil courts across the United States. Beyond their high volume, these cases also involve a wide disparity in representation between parties, a high rate of default judgments, and extreme consequences for consumers. In this session, we will apply a practical lens to the ways courts can use technology tools to improve outcomes, increase access to justice, and ensure compliance with procedural requirements. We will explain our guiding principles of access, fairness, and accuracy in the development of a suite of tools designed to reduce negative outcomes in consumer debt cases.

The suite of tools we will highlight includes a Consumer Debt Collection Information bot (CODI), used in the Philadelphia Municipal Court, that provides custom information to pro se litigants; a debt reform checklist designed to help guide courts’ decisions in implementing regulatory reforms; and a first look at a filing screening tool that will review initial filings for compliance with procedural protections already in place in most jurisdictions. We will discuss how we built these tools while sharing the lessons learned and key takeaways that apply to any legal tech tool build.
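To give a concrete sense of how a filing screening tool of this kind can work, the sketch below applies rule-based checks to a hypothetical filing record. The fields and rules are illustrative placeholders rather than the actual tool’s logic, and real procedural requirements vary by jurisdiction:

```python
from dataclasses import dataclass

# A minimal sketch of rule-based filing screening for consumer debt
# cases. The filing fields and the procedural rules are hypothetical
# placeholders; real requirements vary by jurisdiction.

@dataclass
class DebtFiling:
    original_creditor_named: bool
    amount_itemized: bool
    chain_of_assignment_attached: bool
    within_limitations_period: bool

RULES = [
    ("original_creditor_named", "Complaint must identify the original creditor."),
    ("amount_itemized", "Amount claimed must be itemized (principal, interest, fees)."),
    ("chain_of_assignment_attached", "Debt buyers must document the chain of assignment."),
    ("within_limitations_period", "Claim must be filed within the limitations period."),
]

def screen(filing: DebtFiling) -> list[str]:
    """Return the procedural deficiencies found in a filing."""
    return [message for attr, message in RULES if not getattr(filing, attr)]

# Example: a filing missing chain-of-assignment documentation is flagged
# for clerk or judicial review rather than proceeding to default judgment.
filing = DebtFiling(True, True, False, True)
for deficiency in screen(filing):
    print("FLAG:", deficiency)
```

The design choice worth noting is that the screener only surfaces deficiencies for human review; it does not decide the case, which keeps the automation on the administrative side of the line.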

This session will be practical and tangible, but it will also use those concrete examples to explore this crucial juncture in creating frameworks that automate processes without prejudicing the administration of justice. The session will examine relevant considerations for strategizing potential uses of AI and other technology, taking the principles of ethical usage of generative AI and expanding them to include considerations for non-generative AI tools.