Reference-free Summarization Evaluation via Semantic Correlation and Compression Ratio

Yizhu Liu, Qi Jia, Kenny Zhu


Abstract
A document can be summarized in many valid ways, and reference-based evaluation of summarization has been criticized for its inflexibility: the more reference summaries are available, the more accurate the evaluation, yet collecting a sufficient number of reference summaries is difficult. In this paper, we propose a new automatic reference-free evaluation metric that compares the semantic distributions of the source document and the summary using pretrained language models, and also takes the summary compression ratio into account. Experiments show that this metric is more consistent with human evaluation in terms of coherence, consistency, relevance and fluency.
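To make the idea concrete, the following is a minimal toy sketch of a reference-free score that multiplies a document–summary semantic agreement term by a compression-ratio term. It is an illustration only, not the authors' metric: it substitutes bag-of-words cosine similarity for pretrained language-model semantic distributions, and the `target_ratio` parameter and exponential penalty are hypothetical simplifications.

```python
import math
from collections import Counter


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)


def reference_free_score(document: str, summary: str,
                         target_ratio: float = 0.2) -> float:
    """Toy reference-free score: semantic agreement x compression penalty.

    Hypothetical illustration: a real metric would compare semantic
    distributions from pretrained language models, not word counts.
    """
    doc_tokens = document.lower().split()
    sum_tokens = summary.lower().split()
    semantic = cosine_similarity(Counter(doc_tokens), Counter(sum_tokens))
    # Compression ratio: summary length relative to document length.
    ratio = len(sum_tokens) / max(len(doc_tokens), 1)
    # Penalize summaries whose length deviates from the target ratio.
    penalty = math.exp(-abs(ratio - target_ratio))
    return semantic * penalty
```

A summary that reuses the document's content scores higher than an unrelated one of the same length, while the penalty discourages summaries that are far too long or too short.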
Anthology ID:
2022.naacl-main.153
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2109–2115
URL:
https://aclanthology.org/2022.naacl-main.153
DOI:
10.18653/v1/2022.naacl-main.153
Cite (ACL):
Yizhu Liu, Qi Jia, and Kenny Zhu. 2022. Reference-free Summarization Evaluation via Semantic Correlation and Compression Ratio. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2109–2115, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Reference-free Summarization Evaluation via Semantic Correlation and Compression Ratio (Liu et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.153.pdf
Software:
 2022.naacl-main.153.software.zip
Video:
 https://aclanthology.org/2022.naacl-main.153.mp4
Code
 yizhuliu/summeval