COSMIC: Mutual Information for Task-Agnostic Summarization Evaluation

Maxime Darrin, Philippe Formont, Jackie Cheung, Pablo Piantanida


Abstract
Assessing the quality of summarizers poses significant challenges—gold summaries are hard to obtain and their suitability depends on the use context of the summarization system. Who is the user of the system, and what do they intend to do with the summary? In response, we propose a novel task-oriented evaluation approach that assesses summarizers based on their capacity to produce summaries while preserving task outcomes. We theoretically establish both a lower and upper bound on the expected error rate of these tasks, which depends on the mutual information between source texts and generated summaries. We introduce COSMIC, a practical implementation of this metric, and demonstrate its strong correlation with human judgment-based metrics, as well as its effectiveness in predicting downstream task performance. Comparative analyses against established metrics like BERTScore and ROUGE highlight the competitive performance of COSMIC.
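The bounds described in the abstract tie a summarizer's expected downstream error rate to the mutual information I(source; summary). As a rough illustration only, and not the paper's actual estimator, the following Python sketch computes an InfoNCE lower bound on that mutual information from paired sentence embeddings; the embedding model, the cosine-similarity critic, and all text inputs below are assumptions made for illustration.

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedder, not specified here

def infonce_mi_lower_bound(src_emb, sum_emb):
    """InfoNCE lower bound (in nats) on I(source; summary).

    src_emb, sum_emb: (n, d) arrays; row i holds one (source, summary) pair.
    Uses cosine similarity as the critic; the bound cannot exceed log(n).
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    summ = sum_emb / np.linalg.norm(sum_emb, axis=1, keepdims=True)
    sim = src @ summ.T                           # (n, n) similarity matrix
    n = sim.shape[0]
    log_denom = np.log(np.exp(sim).sum(axis=1))  # row-wise log-sum-exp
    # Positive pairs sit on the diagonal; the rest of each row acts as negatives.
    return float((np.diag(sim) - log_denom).mean() + np.log(n))

# Usage sketch (model name and texts are illustrative assumptions):
model = SentenceTransformer("all-MiniLM-L6-v2")
sources = ["First source document ...", "Second source document ..."]
summaries = ["Summary of the first ...", "Summary of the second ..."]
print(infonce_mi_lower_bound(model.encode(sources), model.encode(summaries)))

Because the InfoNCE bound saturates at log(n), larger batches are needed to resolve high-information pairs; COSMIC's own estimator and embedding choices are detailed in the paper itself.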
Anthology ID: 2024.acl-long.686
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 12696–12717
URL: https://aclanthology.org/2024.acl-long.686
DOI: 10.18653/v1/2024.acl-long.686
Cite (ACL): Maxime Darrin, Philippe Formont, Jackie Cheung, and Pablo Piantanida. 2024. COSMIC: Mutual Information for Task-Agnostic Summarization Evaluation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12696–12717, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): COSMIC: Mutual Information for Task-Agnostic Summarization Evaluation (Darrin et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.686.pdf