APPLS: Evaluating Evaluation Metrics for Plain Language Summarization

Yue Guo, Tal August, Gondy Leroy, Trevor Cohen, Lucy Wang


Abstract
While there has been significant development of models for Plain Language Summarization (PLS), evaluation remains a challenge. PLS lacks a dedicated assessment metric, and the suitability of text generation evaluation metrics is unclear due to the unique transformations involved (e.g., adding background explanations, removing jargon). To address these questions, our study introduces a granular meta-evaluation testbed, APPLS, designed to evaluate metrics for PLS. We identify four PLS criteria from previous work—informativeness, simplification, coherence, and faithfulness—and define a set of perturbations corresponding to these criteria that sensitive metrics should be able to detect. We apply these perturbations to extractive hypotheses for two PLS datasets to form our testbed. Using APPLS, we assess the performance of 14 metrics, including automated scores, lexical features, and LLM prompt-based evaluations. Our analysis reveals that while some current metrics show sensitivity to specific criteria, no single method captures all four criteria simultaneously. We therefore recommend a suite of automated metrics be used to capture PLS quality along all relevant criteria. This work contributes the first meta-evaluation testbed for PLS and a comprehensive evaluation of existing metrics.
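The abstract describes a perturbation-based meta-evaluation: apply controlled degradations to hypothesis summaries and check whether a candidate metric penalizes them. The sketch below illustrates that idea only; the toy perturbation (`drop_last_sentence`), the pass/fail sensitivity check, and the choice of ROUGE-L are illustrative assumptions, not the APPLS testbed or the authors' released code.

```python
# Minimal sketch of perturbation-based metric sensitivity checking.
# Assumption: a metric is "sensitive" to a perturbation if the perturbed
# hypothesis scores lower than the original against the same reference.
from rouge_score import rouge_scorer  # pip install rouge-score


def drop_last_sentence(summary: str) -> str:
    """Toy 'informativeness' perturbation: remove the final sentence."""
    sentences = [s for s in summary.split(". ") if s]
    return ". ".join(sentences[:-1]) if len(sentences) > 1 else summary


def is_sensitive(metric_fn, reference: str, hypothesis: str) -> bool:
    """Return True if the metric scores the perturbed hypothesis lower."""
    original = metric_fn(reference, hypothesis)
    perturbed = metric_fn(reference, drop_last_sentence(hypothesis))
    return perturbed < original


# ROUGE-L F1 as one candidate metric; APPLS evaluates many more.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = lambda ref, hyp: scorer.score(ref, hyp)["rougeL"].fmeasure

reference = ("The trial found the new drug lowered blood pressure. "
             "Side effects were mild and went away on their own.")
hypothesis = ("The new drug reduced blood pressure in the trial. "
              "Mild side effects resolved without treatment.")

print("ROUGE-L sensitive to this perturbation:",
      is_sensitive(rouge_l, reference, hypothesis))
```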
Anthology ID: 2024.emnlp-main.519
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 9194–9211
URL: https://aclanthology.org/2024.emnlp-main.519
Cite (ACL): Yue Guo, Tal August, Gondy Leroy, Trevor Cohen, and Lucy Wang. 2024. APPLS: Evaluating Evaluation Metrics for Plain Language Summarization. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 9194–9211, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): APPLS: Evaluating Evaluation Metrics for Plain Language Summarization (Guo et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.519.pdf