AI and Legal Research


This article is from the fall 2023 issue of Hearsay, the semi-annual newsletter of the Wheat Law Library.

Blake Wilson

As a legal information professional and technology buff, I have always been fascinated by the intersection of law and technology. With the development of artificial intelligence (AI), it was simply a matter of time before this technology bumped into legal research. With the introduction of services such as ChatGPT, however, there are several issues and challenges we must recognize as we move forward.

Quality and Reliability of Data: As the old saying goes, "you are what you eat": AI systems rely heavily on the quality and reliability of the data they are trained on. If the training data is biased, incomplete or inaccurate, it can lead to skewed or incorrect results. Cleaning and curating legal data for training AI models will be an ongoing task.

Bias and Fairness: AI will only be as fair and unbiased as its training data allows. This means AI could very well perpetuate existing societal biases and inequalities. In the context of legal research, this can lead to biased outcomes in areas like case law analysis, sentencing recommendations and more.

Lack of Contextual Understanding: As I tell my legal research students, the issue with research isn't access, it's analysis. Students spend two to three years sharpening their minds in law school to analyze legal issues. While AI can process and analyze large amounts of legal text, it will no doubt struggle with the nuanced context and subtleties of legal language, historical legal changes and cultural shifts that impact legal interpretation. AI systems struggle with these aspects because they typically operate on patterns learned from data rather than a deep understanding of legal principles.

Ethical Considerations: The use of AI in legal research raises ethical questions about the role of technology in the legal profession. For instance, should AI be used to make decisions that have legal consequences, and if so, how can transparency and accountability be ensured?

Loss of Human Judgment: AI tools can provide efficient search results and insights, but they might lack the human judgment and legal expertise needed for critical analysis and decision-making.

Intellectual Property Concerns: Just as issues have arisen with AI-created art, AI tools for legal research will likely run into copyright issues. Outside of primary sources, legal material is afforded the same protections as other created works, and the use of copyrighted material to train AI models is an incredibly complex issue.

User Understanding and Acceptance: Legal professionals might not fully understand how AI systems arrive at their conclusions, leading to potential skepticism or mistrust. Additionally, lawyers might be reluctant to adopt AI tools if they perceive them as a threat to their profession.

Privacy, Confidentiality & Security: Legal documents often contain sensitive and confidential information, and using AI for legal research raises concerns about data privacy and security. There are also questions about uploading client information to a third party and whether doing so breaches attorney-client privilege.

Cost and Access: Considering all of the factors listed above, particularly copyright and data access, implementing AI solutions will likely come with substantial costs, making them less accessible to smaller law firms and individual practitioners and, in turn, their clients.

It's important to note that efforts are being made to address these challenges. Researchers and practitioners are working to develop more transparent, interpretable and fair AI systems for legal research.
