Anurag Sharma
2024
Legal Judgment Reimagined: PredEx and the Rise of Intelligent AI Interpretation in Indian Courts
Shubham Kumar Nigam | Anurag Sharma | Danush Khanna | Noel Shallum | Kripabandhu Ghosh | Arnab Bhattacharya
Findings of the Association for Computational Linguistics: ACL 2024
In the era of Large Language Models (LLMs), predicting judicial outcomes poses significant challenges due to the complexity of legal proceedings and the scarcity of expert-annotated datasets. Addressing this, we introduce Prediction with Explanation (PredEx), the largest expert-annotated dataset for legal judgment prediction and explanation in the Indian context, featuring over 15,000 annotations. This groundbreaking corpus significantly enhances the training and evaluation of AI models in legal analysis, with innovations including the application of instruction tuning to LLMs. This method has markedly improved the predictive accuracy and explanatory depth of these models for legal judgments. We employed various transformer-based models, tailored for both general and Indian legal contexts. Through rigorous lexical, semantic, and expert assessments, our models effectively leverage PredEx to provide precise predictions and meaningful explanations, establishing it as a valuable benchmark for both the legal profession and the NLP community.
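The abstract mentions instruction tuning LLMs on expert-annotated prediction-plus-explanation data. Below is a minimal, hypothetical sketch of what such fine-tuning could look like with Hugging Face `transformers`; the field names (`facts`, `decision`, `explanation`), the prompt template, and the base checkpoint are illustrative assumptions, not the released PredEx schema or the paper's exact setup.

```python
# Hypothetical sketch: instruction-tuning a causal LM to predict a judgment
# and generate an explanation. Dataset fields and prompt wording are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # placeholder base model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

PROMPT = ("### Instruction:\nPredict the outcome of the case and explain the reasoning.\n"
          "### Case:\n{facts}\n### Response:\n{decision}\n{explanation}")

def to_features(example):
    # Format one annotated case as an instruction-response pair and tokenize it.
    text = PROMPT.format(**example)
    enc = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()     # causal-LM loss over the full sequence
    return enc

# Toy single-example dataset, purely for illustration.
raw = Dataset.from_list([{"facts": "The appellant challenges the High Court order ...",
                          "decision": "Appeal allowed.",
                          "explanation": "The lower court erred in applying Section ..."}])
train = raw.map(to_features, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, report_to=[]),
    train_dataset=train,
)
trainer.train()
```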
2023
LLMs – the Good, the Bad or the Indispensable?: A Use Case on Legal Statute Prediction and Legal Judgment Prediction on Indian Court Cases
Shaurya Vats | Atharva Zope | Somsubhra De | Anurag Sharma | Upal Bhattacharya | Shubham Kumar Nigam | Shouvik Guha | Koustav Rudra | Kripabandhu Ghosh
Findings of the Association for Computational Linguistics: EMNLP 2023
Large Language Models (LLMs) have impacted many real-life tasks. To examine the efficacy of LLMs in a high-stakes domain like law, we have applied state-of-the-art LLMs to two popular tasks, Statute Prediction and Judgment Prediction, on Indian Supreme Court cases. We see that while LLMs exhibit excellent predictive performance in Statute Prediction, their performance dips in Judgment Prediction when compared with many standard models. The explanations generated by the LLMs (along with their predictions) are of moderate to decent quality. We also see evidence of gender and religious bias in the LLM-predicted results. In addition, we present a note from a senior legal expert on the ethical concerns of deploying LLMs in these critical legal tasks.
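As a rough illustration of the kind of prompting-based evaluation the abstract describes, the sketch below queries an instruction-following model for a judgment prediction with a short explanation; the checkpoint, prompt wording, and output format are assumptions for illustration, not the paper's evaluation protocol.

```python
# Hypothetical zero-shot judgment-prediction prompt; model and wording are assumed.
from transformers import pipeline

judge = pipeline("text2text-generation", model="google/flan-t5-base")  # placeholder model

case_facts = "The petitioner was detained without trial for ..."        # toy input

prompt = (
    "You are assisting with legal analysis of an Indian Supreme Court case.\n"
    f"Case facts: {case_facts}\n"
    "1. Predict the judgment (appeal accepted or rejected).\n"
    "2. Explain the prediction in two sentences."
)

# The generated text would then be scored for predictive accuracy and
# explanation quality, e.g. by lexical/semantic metrics or expert review.
result = judge(prompt, max_new_tokens=128)[0]["generated_text"]
print(result)
```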