Rohit Saha
2024
LegalLens Shared Task 2024: Legal Violation Identification in Unstructured Text
Ben Hagag | Gil Semo | Dor Bernsohn | Liav Harpaz | Pashootan Vaezipoor | Rohit Saha | Kyryl Truskovskyi | Gerasimos Spanakis
Proceedings of the Natural Legal Language Processing Workshop 2024
This paper presents the results of the LegalLens Shared Task, focusing on detecting legal violations within text in the wild across two sub-tasks: LegalLens-NER for identifying legal violation entities and LegalLens-NLI for associating these violations with relevant legal contexts and affected individuals. Using an enhanced LegalLens dataset covering the labor, privacy, and consumer protection domains, 38 teams participated in the task. Our analysis reveals that while a mix of approaches was used, the top-performing teams in both tasks consistently relied on fine-tuning pre-trained language models, outperforming legal-specific models and few-shot methods. The top-performing team achieved a 7.11% improvement over the baseline in NER, while NLI saw a smaller improvement of 5.7%. Despite these gains, the complexity of legal texts leaves room for further advancements.
LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text
Dor Bernsohn | Gil Semo | Yaron Vazana | Gila Hayat | Ben Hagag | Joel Niklaus | Rohit Saha | Kyryl Truskovskyi
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
In this study, we focus on two main tasks: detecting legal violations within unstructured textual data, and associating these violations with potentially affected individuals. We constructed two datasets using Large Language Models (LLMs), which were subsequently validated by domain expert annotators. Both tasks were designed specifically for the context of class-action cases. The experimental design incorporated fine-tuning models from the BERT family and open-source LLMs, as well as few-shot experiments using closed-source LLMs. Our results, with an F1-score of 62.69% (violation identification) and 81.02% (associating victims), show that our datasets and setups can be used for both tasks. Finally, we publicly release the datasets and the code used for the experiments in order to advance further research in the area of legal natural language processing (NLP).