Do Language Models Learn about Legal Entity Types during Pretraining?

Claire Barale, Michael Rovatsos, Nehal Bhuta


Abstract
Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during pretraining, potentially serving as a valuable source of incidental supervision for downstream tasks. However, there has been little research on the retrieval of domain-specific knowledge, and specifically legal knowledge. We propose to explore the task of Entity Typing, which serves as a proxy for evaluating legal knowledge as an essential aspect of text comprehension, and is a foundational task for numerous downstream legal NLP applications. Through systematic evaluation and analysis with two types of prompting (cloze sentences and QA-based templates), and in order to clarify the nature of the acquired cues, we compare diverse types and lengths of entities (both general and domain-specific), semantic versus syntactic signals, and different LM pretraining corpora (generic and legal-oriented) and architectures (encoder-only BERT-based and decoder-only Llama2). We show that (1) Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates, (2) law-oriented LMs show inconsistent performance, possibly due to variations in their training corpora, (3) LMs are able to type entities even when the entities span multiple tokens, (4) all models struggle with entities belonging to sub-domains of the law, and (5) Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures.
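The abstract frames entity typing as a cloze-style probing task over pretrained LMs. Below is a minimal, hypothetical sketch of such a probe, assuming the Hugging Face Transformers fill-mask pipeline with bert-base-uncased; the model choice, the example entity, and the template wording are illustrative assumptions, not the paper's exact templates or evaluation setup.

# Minimal sketch (not the authors' code): probe a masked LM for legal
# entity typing with a cloze template, as described in the abstract.
from transformers import pipeline

# Model choice is an assumption for illustration only.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical legal entity and a cloze sentence with one mask slot.
entity = "The European Court of Human Rights"
template = f"{entity} is a [MASK]."

# The top predictions for the masked slot serve as candidate entity types.
for prediction in fill_mask(template, top_k=5):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")

A QA-based probe of a decoder-only model such as Llama2 would instead phrase the same query as a question ("What type of entity is ... ?") and read the type off the generated answer.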
Anthology ID:
2023.nllp-1.4
Volume:
Proceedings of the Natural Legal Language Processing Workshop 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Daniel Preoțiuc-Pietro, Catalina Goanta, Ilias Chalkidis, Leslie Barrett, Gerasimos (Jerry) Spanakis, Nikolaos Aletras
Venues:
NLLP | WS
Publisher:
Association for Computational Linguistics
Pages:
25–37
URL:
https://aclanthology.org/2023.nllp-1.4
DOI:
10.18653/v1/2023.nllp-1.4
Cite (ACL):
Claire Barale, Michael Rovatsos, and Nehal Bhuta. 2023. Do Language Models Learn about Legal Entity Types during Pretraining?. In Proceedings of the Natural Legal Language Processing Workshop 2023, pages 25–37, Singapore. Association for Computational Linguistics.
Cite (Informal):
Do Language Models Learn about Legal Entity Types during Pretraining? (Barale et al., NLLP-WS 2023)
PDF:
https://aclanthology.org/2023.nllp-1.4.pdf