Sahan Jayasinghe
2022
Learning Sentence Embeddings In The Legal Domain with Low Resource Settings
Sahan Jayasinghe | Lakith Rambukkanage | Ashan Silva | Nisansa de Silva | Shehan Perera | Madhavi Perera
Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation
Legal Case Winning Party Prediction With Domain Specific Auxiliary Models
Sahan Jayasinghe | Lakith Rambukkanage | Ashan Silva | Nisansa de Silva | Amal Shehan Perera
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
Sifting through hundreds of old case documents to obtain information pertinent to the case at hand has been a major part of the legal profession for centuries. However, with the expansion of court systems and the compounding nature of case law, this task has become increasingly intractable under time and resource constraints. Thus, automation via Natural Language Processing presents itself as a viable solution. In this paper, we discuss a novel approach for predicting the winning party of a current court case by training an analytical model on a corpus of prior court cases and then running it on the prepared text of the current court case. This will allow legal professionals to prepare their cases efficiently and precisely so as to maximize the chance of victory. The model is built and experimented with using legal domain-specific sub-models to provide more visibility into the final model, along with other variations. We show that our model, using critical sentence annotation with a transformer encoder over RoBERTa-based sentence embeddings, is able to obtain an accuracy of 75.75%, outperforming other models.
Co-authors
- Lakith Rambukkanage 2
- Ashan Silva 2
- Nisansa de Silva 2
- Shehan Perera 1
- Madhavi Perera 1