Supriti Vijay
2024
Can Abstract Meaning Representation Facilitate Fair Legal Judgement Predictions?
Supriti Vijay | Daniel Hershcovich
Proceedings of the Fifth Workshop on Insights from Negative Results in NLP
Legal judgment prediction encompasses the automated prediction of case outcomes by leveraging historical facts and opinions. While this approach holds the potential to enhance the efficiency of the legal system, it also raises critical concerns regarding the perpetuation of biases. Abstract Meaning Representation (AMR) has shown promise as an intermediate text representation in various downstream NLP tasks due to its ability to capture semantically meaningful information in a graph-like structure. In this paper, we employ this ability of AMR in the legal judgment prediction task and assess to what extent it encodes biases or, conversely, abstracts away from them. Our study reveals that while AMR-based models exhibit worse overall performance than transformer-based models, they show less bias for attributes such as age and defendant state than for gender. By shedding light on these findings, this paper contributes to a more nuanced understanding of AMR’s potential benefits and limitations in legal NLP.
2021
“Something Something Hota Hai!” An Explainable Approach towards Sentiment Analysis on Indian Code-Mixed Data
Aman Priyanshu | Aleti Vardhan | Sudarshan Sivakumar | Supriti Vijay | Nipuna Chhabra
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
The increasing use of social media sites in countries like India has given rise to large volumes of code-mixed data. Sentiment analysis of this data can provide integral insights into people’s perspectives and opinions. Code-mixed data is often noisy due to multiple spellings of the same word, the lack of a definite word order within a sentence, and random abbreviations. Thus, working with code-mixed data is more challenging than working with monolingual data. Interpreting a model’s predictions allows us to determine its robustness against different forms of noise. In this paper, we propose a methodology for integrating explainable approaches into code-mixed sentiment analysis. By interpreting the predictions of sentiment analysis models, we evaluate how well the models adapt to the implicit noise present in code-mixed data.