An Anchor-Based Automatic Evaluation Metric for Document Summarization

Kexiang Wang, Tianyu Liu, Baobao Chang, Zhifang Sui


Abstract
The widespread adoption of reference-based automatic evaluation metrics such as ROUGE has promoted the development of document summarization. In this paper, we consider a new protocol for designing reference-based metrics that requires the endorsement of the source document(s). Following this protocol, we propose an anchored ROUGE metric that fixes each summary particle on the source document, which bases the computation on more solid ground. Empirical results on benchmark datasets validate that the source document helps to induce a higher correlation with human judgments for the ROUGE metric. Being self-explanatory and easy to implement, the protocol can naturally foster various effective designs of reference-based metrics besides the anchored ROUGE introduced here.
Anthology ID:
2020.coling-main.500
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5696–5701
URL:
https://aclanthology.org/2020.coling-main.500
DOI:
10.18653/v1/2020.coling-main.500
Bibkey:
Cite (ACL):
Kexiang Wang, Tianyu Liu, Baobao Chang, and Zhifang Sui. 2020. An Anchor-Based Automatic Evaluation Metric for Document Summarization. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5696–5701, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
An Anchor-Based Automatic Evaluation Metric for Document Summarization (Wang et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.500.pdf