Sourish Dasgupta
2024
Are Large Language Models In-Context Personalized Summarizers? Get an iCOPERNICUS Test Done!
Divya Patel | Pathik Patel | Ankush Chander | Sourish Dasgupta | Tanmoy Chakraborty
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
2023
Accuracy is not enough: Evaluating Personalization in Summarizers
Rahul Vansh | Darsh Rank | Sourish Dasgupta | Tanmoy Chakraborty
Findings of the Association for Computational Linguistics: EMNLP 2023
Text summarization models are evaluated in terms of their accuracy and quality using various measures such as ROUGE, BLEU, METEOR, BERTScore, PYRAMID, readability, and several other recently proposed ones. The central objective of all accuracy measures is to evaluate the model's ability to capture saliency accurately. Since saliency is subjective w.r.t. readers' preferences, there cannot be a one-size-fits-all summary for a given document. This means that in many use cases, summarization models need to be personalized w.r.t. user profiles. However, to our knowledge, there is no measure to evaluate the degree of personalization of a summarization model. In this paper, we first establish that existing accuracy measures cannot evaluate the degree of personalization of any summarization model, and then propose a novel measure, called EGISES, to compute it automatically. Using the PENS dataset released by Microsoft Research, we analyze the degree of personalization of ten different state-of-the-art summarization models (both extractive and abstractive), five of which are explicitly trained for personalized summarization, while the remaining five are adapted to exhibit personalization. We conclude by proposing a generalized accuracy measure, called P-Accuracy, for designing accuracy measures that also take personalization into account, and we demonstrate the robustness and reliability of the measure through meta-evaluation.
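The abstract's core premise is that a personalized summarizer should produce summaries whose pairwise divergence tracks the divergence between the corresponding user profiles, whereas a non-personalized model emits near-identical summaries regardless of the reader. The sketch below is a hedged illustration of that premise only; the function names, the Jaccard-distance proxy, and the pairing scheme are assumptions for exposition, not the paper's EGISES definition:

```python
# Illustrative sketch only: EGISES itself is defined in the paper; here we use a
# hypothetical token-level Jaccard distance as a stand-in divergence measure.
from itertools import combinations


def jaccard_distance(a: str, b: str) -> float:
    """Token-level Jaccard distance between two texts (illustrative proxy)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)


def personalization_signal(profiles: list[str], summaries: list[str]):
    """For each pair of users, pair up profile divergence with the divergence
    of the summaries generated for them.  A non-personalized model yields
    ~0 summary divergence for every pair, however different the profiles."""
    return [
        (jaccard_distance(profiles[i], profiles[j]),
         jaccard_distance(summaries[i], summaries[j]))
        for i, j in combinations(range(len(profiles)), 2)
    ]
```

For example, feeding two very different profiles alongside identical summaries yields a pair like `(1.0, 0.0)`: high profile divergence with zero summary divergence, the signature of a model that does not personalize.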
Co-authors
- Tanmoy Chakraborty 2
- Divya Patel 1
- Pathik Patel 1
- Ankush Chander 1
- Rahul Vansh 1