Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement

Rosamond Thalken, Edward Stiglitz, David Mimno, Matthew Wilkens


Abstract
Generative language models (LMs) are increasingly used for document class-prediction tasks and promise enormous improvements in cost and efficiency. Existing research often examines simple classification tasks, but the capability of LMs to classify on complex or specialized tasks is less well understood. We consider a highly complex task that is challenging even for humans: the classification of legal reasoning according to jurisprudential philosophy. Using a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts, we systematically test the performance of a variety of LMs. We find that generative models perform poorly when given instructions (i.e., prompts) equal to the instructions presented to human annotators through our codebook. Our strongest results derive from fine-tuning models on the annotated dataset; the best-performing model is an in-domain model, LEGAL-BERT. We apply predictions from this fine-tuned model to study historical trends in jurisprudence, an exercise that both aligns with prominent qualitative historical accounts and points to areas of possible refinement in those accounts. Our findings generally sound a note of caution in the use of generative LMs on complex tasks without fine-tuning and point to the continued relevance of human annotation-intensive classification methods.
Anthology ID:
2023.emnlp-main.575
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9252–9265
URL:
https://aclanthology.org/2023.emnlp-main.575
DOI:
10.18653/v1/2023.emnlp-main.575
Cite (ACL):
Rosamond Thalken, Edward Stiglitz, David Mimno, and Matthew Wilkens. 2023. Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9252–9265, Singapore. Association for Computational Linguistics.
Cite (Informal):
Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement (Thalken et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.575.pdf
Video:
https://aclanthology.org/2023.emnlp-main.575.mp4