Learning with Rejection for Abstractive Text Summarization

Meng Cao, Yue Dong, Jingyi He, Jackie Chi Kit Cheung


Abstract
State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, mainly due to noise in the training dataset. Existing methods opt to drop the noisy samples or tokens from the training set entirely, reducing the effective training set size and creating an artificial propensity to copy words from the source. In this work, we propose a training objective for abstractive summarization based on rejection learning, in which the model learns whether or not to reject potentially noisy tokens. We further propose a regularized decoding objective that penalizes non-factual candidate summaries during inference by using the rejection probability learned during training. We show that our method considerably improves the factuality of generated summaries in automatic and human evaluations when compared to five baseline models, and that it does so while increasing the abstractiveness of the generated summaries.
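The abstract describes two mechanisms: a training loss that lets the model assign probability to rejecting a potentially noisy target token, and a decoding score that penalizes candidates with high rejection probability. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's exact formulation; the function names, the `penalty` discount, and the `alpha` weight are all assumptions for illustration.

```python
import math

def rejection_nll(probs, target_idx, reject_prob, penalty=0.5):
    """Token-level negative log-likelihood with a rejection option (sketch).

    probs       -- model's distribution over the vocabulary at this step
    target_idx  -- index of the (possibly noisy) reference token
    reject_prob -- model's learned probability of rejecting this token
    penalty     -- discount (< 1) so that rejecting is never free: a noisy
                   token can be partially absorbed by the reject path, but
                   a confident correct prediction is still cheaper
    """
    return -math.log(probs[target_idx] + penalty * reject_prob)

def regularized_score(logprob_sum, reject_probs, alpha=1.0):
    """Decoding-time score for a candidate summary (sketch).

    logprob_sum  -- sum of token log-probabilities for the candidate
    reject_probs -- per-token rejection probabilities along the candidate
    alpha        -- weight of the factuality penalty

    Candidates whose tokens the model would rather reject are pushed
    down the beam, which is the intuition behind penalizing
    non-factual candidates at inference time.
    """
    return logprob_sum - alpha * sum(reject_probs)
```

On a toy distribution, a low-probability (likely noisy) target with a high rejection probability incurs a much smaller loss than plain cross-entropy would, which is the mechanism that keeps such samples in the training set instead of dropping them.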
Anthology ID:
2022.emnlp-main.663
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9768–9780
URL:
https://aclanthology.org/2022.emnlp-main.663
DOI:
10.18653/v1/2022.emnlp-main.663
Cite (ACL):
Meng Cao, Yue Dong, Jingyi He, and Jackie Chi Kit Cheung. 2022. Learning with Rejection for Abstractive Text Summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9768–9780, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Learning with Rejection for Abstractive Text Summarization (Cao et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.663.pdf