Learning to Score System Summaries for Better Content Selection Evaluation.

Maxime Peyrard, Teresa Botschen, Iryna Gurevych


Abstract
The evaluation of summaries is a challenging but crucial task in the summarization field. In this work, we propose to learn an automatic scoring metric based on the human judgments available as part of classical summarization datasets like TAC-2008 and TAC-2009. Any existing automatic scoring metric can be included as a feature; the model learns the combination exhibiting the best correlation with human judgments. The reliability of the new metric is tested in a further manual evaluation, in which we ask humans to evaluate summaries covering the whole scoring spectrum of the metric. We release the trained metric as an open-source tool.
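The core idea described in the abstract, combining existing metric scores into a single learned score that tracks human judgments, can be sketched as a simple regression problem. The following is a minimal illustration, not the authors' released tool: the feature metrics, data, and model choice are placeholder assumptions.

```python
# Minimal sketch of the paper's core idea: learn a combination of
# existing automatic metrics that correlates with human judgments.
# NOT the authors' released implementation; all data here is synthetic.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per system summary, one column
# per existing automatic metric (e.g., ROUGE-1, ROUGE-2, JS divergence).
rng = np.random.default_rng(0)
X = rng.random((200, 3))  # placeholder metric scores
# Placeholder "human judgment" scores, loosely tied to the features.
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.05, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the combination of metric features that best predicts
# the human scores on the training split.
model = LinearRegression().fit(X_train, y_train)

# Evaluate by rank correlation with held-out human judgments,
# the standard way summarization metrics are compared.
tau, _ = kendalltau(model.predict(X_test), y_test)
print(f"Kendall's tau with human judgments: {tau:.3f}")
```

The paper's actual model and feature set differ; see the PDF linked below for the features used and the correlation results on TAC-2008 and TAC-2009.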
Anthology ID:
W17-4510
Volume:
Proceedings of the Workshop on New Frontiers in Summarization
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Lu Wang, Jackie Chi Kit Cheung, Giuseppe Carenini, Fei Liu
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
74–84
URL:
https://aclanthology.org/W17-4510
DOI:
10.18653/v1/W17-4510
Cite (ACL):
Maxime Peyrard, Teresa Botschen, and Iryna Gurevych. 2017. Learning to Score System Summaries for Better Content Selection Evaluation. In Proceedings of the Workshop on New Frontiers in Summarization, pages 74–84, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Learning to Score System Summaries for Better Content Selection Evaluation. (Peyrard et al., 2017)
PDF:
https://aclanthology.org/W17-4510.pdf
Data:
FrameNet