PrefScore: Pairwise Preference Learning for Reference-free Summarization Quality Assessment

Ge Luo, Hebi Li, Youbiao He, Forrest Sheng Bao


Abstract
Evaluating machine-generated summaries without a human-written reference summary has long been needed. Inspired by preference labeling in existing work on summarization evaluation, we propose to judge summary quality by learning a preference ranking over summaries with the Bradley-Terry power ranking model, using inferior summaries generated by corrupting base summaries. Extensive experiments on several datasets show that our weakly supervised scheme produces scores that correlate highly with human ratings.
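The core idea in the abstract — scoring a summary so that a better summary beats a corrupted, inferior one under the Bradley-Terry model — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the sentence-dropping corruption are assumptions for demonstration purposes.

```python
import math
import random

def bradley_terry_loss(score_better, score_worse):
    # Under the Bradley-Terry model, P(better beats worse)
    #   = exp(s_b) / (exp(s_b) + exp(s_w)).
    # This returns the negative log-likelihood of that outcome,
    # which a preference learner would minimize over summary pairs.
    return math.log(1.0 + math.exp(score_worse - score_better))

def corrupt(summary_sentences, rng=random):
    # One illustrative corruption: drop a random sentence to create
    # an inferior summary for pairwise training (hypothetical; the
    # paper may use different corruption strategies).
    if len(summary_sentences) <= 1:
        return list(summary_sentences)
    drop = rng.randrange(len(summary_sentences))
    return [s for i, s in enumerate(summary_sentences) if i != drop]

# Usage: a wider score margin for the better summary yields a lower loss.
base = ["The cat sat on the mat.", "It then fell asleep.", "The dog watched."]
worse = corrupt(base)
assert len(worse) == len(base) - 1
assert bradley_terry_loss(2.0, 0.0) < bradley_terry_loss(0.0, 0.0)
```

In training, the scores would come from a learned scoring model applied to the base and corrupted summaries; minimizing the pairwise loss teaches the model to prefer the uncorrupted summary.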
Anthology ID:
2022.coling-1.515
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5896–5903
URL:
https://aclanthology.org/2022.coling-1.515
Cite (ACL):
Ge Luo, Hebi Li, Youbiao He, and Forrest Sheng Bao. 2022. PrefScore: Pairwise Preference Learning for Reference-free Summarization Quality Assessment. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5896–5903, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
PrefScore: Pairwise Preference Learning for Reference-free Summarization Quality Assessment (Luo et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.515.pdf
Code
nkwbtb/prefscore
Data
BigPatent, BillSum, NEWSROOM