Pavan Baswani


2023

LTRC_IIITH’s 2023 Submission for Prompting Large Language Models as Explainable Metrics Task
Pavan Baswani | Ananya Mukherjee | Manish Shrivastava
Proceedings of the 4th Workshop on Evaluation and Comparison of NLP Systems

In this report, we share our contribution to the Eval4NLP Shared Task titled “Prompting Large Language Models as Explainable Metrics.” We build our prompts with a primary focus on effective prompting strategies, score-aggregation, and explainability for LLM-based metrics. We participated in the track for smaller models by submitting the scores along with their explanations. According to the Kendall correlation scores on the leaderboard, our MT evaluation submission ranks second-best, while our summarization evaluation submission ranks fourth, with only a 0.06 difference from the leading submission.
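For readers unfamiliar with the setup, the sketch below illustrates the general idea of prompting an LLM as an explainable metric and aggregating scores across repeated prompts. The prompt template, the parse_score helper, and the llm_generate callable are illustrative placeholders, not the actual prompts or models used in this submission.

```python
import re
from statistics import mean
from typing import Callable, List, Optional

# Hypothetical prompt asking for a 0-100 score plus a short justification;
# the submission's actual prompting strategies are not reproduced here.
PROMPT_TEMPLATE = (
    "You are an evaluation metric. Rate the quality of the translation below "
    "on a scale from 0 (worst) to 100 (best), then explain your rating.\n\n"
    "Source: {source}\n"
    "Translation: {hypothesis}\n\n"
    "Answer in the form 'Score: <number>\\nExplanation: <text>'."
)

def parse_score(output: str) -> Optional[float]:
    """Extract the numeric score from the model's free-form answer."""
    match = re.search(r"Score:\s*([0-9]+(?:\.[0-9]+)?)", output)
    return float(match.group(1)) if match else None

def score_with_aggregation(
    llm_generate: Callable[[str], str],  # placeholder for any LLM completion function
    source: str,
    hypothesis: str,
    n_samples: int = 3,
) -> tuple[Optional[float], List[str]]:
    """Prompt the model several times, keep the explanations, and average the parsed scores."""
    scores, explanations = [], []
    for _ in range(n_samples):
        output = llm_generate(PROMPT_TEMPLATE.format(source=source, hypothesis=hypothesis))
        explanations.append(output)
        score = parse_score(output)
        if score is not None:
            scores.append(score)
    return (mean(scores) if scores else None), explanations
```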

LTRC at SemEval-2023 Task 6: Experiments with Ensemble Embeddings
Pavan Baswani | Hiranmai Sri Adibhatla | Manish Shrivastava
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)

In this paper, we present our team’s involvement in Task 6: LegalEval: Understanding Legal Texts. The task comprised three subtasks, and we focus on subtask A: Rhetorical Roles prediction. Our approach included experimenting with pre-trained embeddings and refining them with statistical and neural classifiers. We provide a thorough examination of our experiments, solutions, and analysis, culminating in our best-performing model and current progress. We achieved a micro F1 score of 0.6133 on the test data using fine-tuned LegalBERT embeddings.
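As a rough illustration of refining pre-trained legal embeddings with a statistical classifier, the sketch below extracts sentence representations from the publicly available nlpaueb/legal-bert-base-uncased checkpoint and trains a logistic-regression classifier on toy rhetorical-role labels. The specific checkpoint, classifier, and data are assumptions for illustration and do not reproduce the paper's fine-tuning or ensemble setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

# "nlpaueb/legal-bert-base-uncased" is a publicly available LegalBERT checkpoint;
# whether it matches the exact variant used in the paper is an assumption.
MODEL_NAME = "nlpaueb/legal-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(sentences: list[str]) -> torch.Tensor:
    """Return [CLS] embeddings for a batch of sentences."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    return hidden[:, 0, :]  # [CLS] token representation

# Toy sentences paired with rhetorical-role labels (illustrative only).
train_sents = ["The appellant filed a petition before the High Court.",
               "The court held that the contract was void."]
train_labels = ["FACTS", "RULING"]

clf = LogisticRegression(max_iter=1000)
clf.fit(embed(train_sents).numpy(), train_labels)
print(clf.predict(embed(["The judge observed that the evidence was insufficient."]).numpy()))
```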

Fine-grained Contract NER using instruction based model
Hiranmai Adibhatla | Pavan Baswani | Manish Shrivastava
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

2022

TeSum: Human-Generated Abstractive Summarization Corpus for Telugu
Ashok Urlana | Nirmal Surange | Pavan Baswani | Priyanka Ravva | Manish Shrivastava
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Expert human annotation for summarization is an expensive task and cannot be done at large scale. With this work, we show that even with a crowd-sourced summary generation approach, quality can be controlled through aggressive, expert-informed filtering and sampling-based human evaluation. We propose a pipeline that crowd-sources summarization data and then aggressively filters the content via automatic and partial expert evaluation. Using this pipeline, we create a high-quality Telugu Abstractive Summarization dataset (TeSum), which we validate with sampling-based human evaluation. We also provide baseline numbers for various models commonly used for summarization. A number of recently released summarization datasets scraped web content relying on the assumption that a summary is made available with the article by the publishers. While this assumption holds for multiple resources (or news sites) in English, it should not be generalised across languages without thorough analysis and verification. Our analysis clearly shows that this assumption does not hold true for most Indian-language news resources. We show that our proposed filtration pipeline can even be applied to these large-scale scraped datasets to extract better-quality article-summary pairs.
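To give a concrete flavour of the automatic stage of such a pipeline, the sketch below filters article-summary pairs by compression ratio and a crude novel-word abstractivity proxy. Both criteria and the thresholds (max_compression, min_novelty) are illustrative assumptions, not the actual TeSum filters, and the subsequent expert and sampling-based evaluation stages are only noted in a comment.

```python
def compression_ratio(article: str, summary: str) -> float:
    """Fraction of the article's length retained in the summary (word level)."""
    return len(summary.split()) / max(len(article.split()), 1)

def novel_unigram_ratio(article: str, summary: str) -> float:
    """Share of summary words that do not appear in the article (crude abstractivity proxy)."""
    article_vocab = set(article.lower().split())
    summary_words = summary.lower().split()
    if not summary_words:
        return 0.0
    return sum(w not in article_vocab for w in summary_words) / len(summary_words)

def passes_automatic_filters(article: str, summary: str,
                             max_compression: float = 0.5,
                             min_novelty: float = 0.1) -> bool:
    """Keep a pair only if the summary is short enough and sufficiently abstractive."""
    return (compression_ratio(article, summary) <= max_compression
            and novel_unigram_ratio(article, summary) >= min_novelty)

# Pairs surviving the automatic stage would then be sampled for expert evaluation.
pairs = [("long article text ...", "short abstractive summary ...")]
filtered = [p for p in pairs if passes_automatic_filters(*p)]
```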