GUMSum: Multi-Genre Data and Evaluation for English Abstractive Summarization

Yang Janet Liu, Amir Zeldes


Abstract
Automatic summarization with pre-trained language models has led to impressively fluent results, but is prone to ‘hallucinations’, low performance on non-news genres, and outputs which are not exactly summaries. Targeting ACL 2023’s ‘Reality Check’ theme, we present GUMSum, a small but carefully crafted dataset of English summaries in 12 written and spoken genres for the evaluation of abstractive summarization. Summaries are highly constrained, focusing on substitutive potential, factuality, and faithfulness. We present guidelines and evaluate human agreement as well as subjective judgments on recent system outputs, comparing general-domain untuned approaches, a fine-tuned one, and a prompt-based approach to human performance. Results show that while GPT-3 achieves impressive scores, it still underperforms humans, with varying quality across genres. Human judgments reveal different types of errors in supervised, prompted, and human-generated summaries, shedding light on the challenges of producing a good summary.
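
As a quick illustration of how reference summaries like GUMSum’s can be used to score system outputs automatically (the paper reports ROUGE-style scores alongside human judgments), the minimal sketch below compares a candidate summary against a human reference with the rouge-score package. The reference and candidate strings here are hypothetical placeholders, not actual GUMSum data.

    # Minimal sketch: scoring a system summary against a human reference
    # with ROUGE. Requires: pip install rouge-score
    from rouge_score import rouge_scorer

    # Hypothetical human-written reference summary (placeholder text).
    reference = (
        "A one-sentence human reference summary of the source document."
    )
    # Hypothetical system-generated candidate summary to evaluate.
    candidate = (
        "A system output summarizing the same document."
    )

    scorer = rouge_scorer.RougeScorer(
        ["rouge1", "rouge2", "rougeLsum"], use_stemmer=True
    )
    scores = scorer.score(reference, candidate)

    # Each entry holds precision, recall, and F1 for one ROUGE variant.
    for metric, result in scores.items():
        print(f"{metric}: P={result.precision:.3f} "
              f"R={result.recall:.3f} F1={result.fmeasure:.3f}")

Note that such automatic scores are only a first pass; as the paper argues, human judgments of factuality and faithfulness reveal error types that ROUGE does not capture.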
Anthology ID:
2023.findings-acl.593
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9315–9327
URL:
https://aclanthology.org/2023.findings-acl.593
DOI:
10.18653/v1/2023.findings-acl.593
Cite (ACL):
Yang Janet Liu and Amir Zeldes. 2023. GUMSum: Multi-Genre Data and Evaluation for English Abstractive Summarization. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9315–9327, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
GUMSum: Multi-Genre Data and Evaluation for English Abstractive Summarization (Liu & Zeldes, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.593.pdf