Towards Explainability in Legal Outcome Prediction Models

Josef Valvoda, Ryan Cotterell


Abstract
Current legal outcome prediction models, a staple of legal NLP, do not explain their reasoning. However, to employ these models in the real world, human legal actors need to be able to understand the model’s decisions. In the case of common law, legal practitioners reason towards the outcome of a case by referring to past case law, known as precedent. We contend that precedent is, therefore, a natural way of facilitating explainability for legal NLP models. In this paper, we contribute a novel method for identifying the precedent employed by legal outcome prediction models. Furthermore, by developing a taxonomy of legal precedent, we are able to compare human judges and neural models with respect to the different types of precedent they rely on. We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
Anthology ID:
2024.naacl-long.404
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
7269–7289
URL:
https://aclanthology.org/2024.naacl-long.404
DOI:
10.18653/v1/2024.naacl-long.404
Cite (ACL):
Josef Valvoda and Ryan Cotterell. 2024. Towards Explainability in Legal Outcome Prediction Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7269–7289, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Towards Explainability in Legal Outcome Prediction Models (Valvoda & Cotterell, NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.404.pdf