Aniket Deroy
2024
Rethinking Legal Judgement Prediction in a Realistic Scenario in the Era of Large Language Models
Shubham Kumar Nigam | Aniket Deroy | Subhankar Maity | Arnab Bhattacharya
Proceedings of the Natural Legal Language Processing Workshop 2024
This study investigates judgment prediction in a realistic scenario within the context of Indian judgments, utilizing a range of transformer-based models, including InLegalBERT, BERT, and XLNet, alongside LLMs such as Llama-2 and GPT-3.5 Turbo. In this realistic scenario, we simulate how judgments are predicted at the point when a case is presented for a decision in court, using only the information available at that time, such as the facts of the case, statutes, precedents, and arguments. This approach mimics real-world conditions, where decisions must be made without the benefit of hindsight, unlike retrospective analyses often found in previous studies. For transformer models, we experiment with hierarchical transformers and the summarization of judgment facts to optimize input for these models. Our experiments with LLMs reveal that GPT-3.5 Turbo excels in realistic scenarios, demonstrating robust performance in judgment prediction. Furthermore, incorporating additional legal information, such as statutes and precedents, significantly improves the outcome of the prediction task. The LLMs also provide explanations for their predictions. To evaluate the quality of these predictions and explanations, we introduce two human evaluation metrics: Clarity and Linking. Our findings from both automatic and human evaluations indicate that, despite advancements in LLMs, they are yet to achieve expert-level performance in judgment prediction and explanation tasks.
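A minimal illustrative sketch (not the authors' exact prompts or pipeline) of how a realistic-scenario prediction could be posed to GPT-3.5 Turbo: the model sees only pre-decision information (facts, statutes, precedents, arguments) and is asked for a decision plus an explanation. The function name, prompt wording, and use of the `openai` client are assumptions for illustration.

```python
# Illustrative sketch only: prompt an LLM to predict a case outcome from
# pre-decision information (facts, statutes, precedents, arguments) and to
# explain its prediction. Assumes the `openai` package and an OPENAI_API_KEY
# in the environment; this is not the paper's exact prompt or pipeline.
from openai import OpenAI

client = OpenAI()

def predict_judgment(facts: str, statutes: str, precedents: str, arguments: str) -> str:
    """Ask the model for a binary outcome (ACCEPTED/REJECTED) plus a short explanation."""
    prompt = (
        "You are assisting with legal judgment prediction for an Indian court case.\n"
        "Using ONLY the information available before the decision, predict whether the\n"
        "appeal/petition will be ACCEPTED or REJECTED, then explain your reasoning.\n\n"
        f"Facts:\n{facts}\n\nStatutes:\n{statutes}\n\n"
        f"Precedents:\n{precedents}\n\nArguments:\n{arguments}\n\n"
        "Answer with 'Decision: ACCEPTED' or 'Decision: REJECTED', followed by an explanation."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output for easier evaluation
    )
    return response.choices[0].message.content
```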
2023
Nonet at SemEval-2023 Task 6: Methodologies for Legal Evaluation
Shubham Kumar Nigam | Aniket Deroy | Noel Shallum | Ayush Kumar Mishra | Anup Roy | Shubham Kumar Mishra | Arnab Bhattacharya | Saptarshi Ghosh | Kripabandhu Ghosh
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
This paper describes our submission to SemEval-2023 Task 6 on LegalEval: Understanding Legal Texts. Our submission concentrated on three subtasks: Legal Named Entity Recognition (L-NER) for Task-B, Legal Judgment Prediction (LJP) for Task-C1, and Court Judgment Prediction with Explanation (CJPE) for Task-C2. We conducted various experiments on these subtasks and present the results in detail, including data statistics and methodology. Legal tasks such as those tackled in this research have been gaining importance due to the increasing need to automate legal analysis and support. Our team obtained competitive rankings of 15th, 11th, and 1st in Task-B, Task-C1, and Task-C2, respectively, as reported on the leaderboard.
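For the L-NER subtask (Task-B), a minimal hedged sketch of how a fine-tuned token-classification model could be applied with the Hugging Face `transformers` pipeline; the checkpoint path and example sentence are placeholders, not the team's actual system.

```python
# Sketch of Legal Named Entity Recognition (Task-B style) using a
# token-classification pipeline. The model path is a placeholder for a
# fine-tuned legal NER checkpoint; this is not the submitted system.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="path/to/legal-ner-checkpoint",  # placeholder: substitute a fine-tuned L-NER model
    aggregation_strategy="simple",          # merge word pieces into whole entity spans
)

sentence = (
    "The appeal was heard in the Supreme Court of India "
    "under Section 302 of the Indian Penal Code."
)
for entity in ner(sentence):
    print(entity["entity_group"], "->", entity["word"])
```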
Co-authors
- Shubham Kumar Nigam 2
- Arnab Bhattacharya 2
- Noel Shallum 1
- Ayush Kumar Mishra 1
- Anup Roy 1