Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection

Sihao Chen, Fan Zhang, Kazoo Sone, Dan Roth


Abstract
Despite significant progress in neural abstractive summarization, recent studies have shown that current models are prone to generating summaries that are unfaithful to the original context. To address this issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique for correcting extrinsic hallucinations (i.e., information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries in which named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We also analyze the typical hallucination phenomena produced by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
Anthology ID:
2021.naacl-main.475
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5935–5941
URL:
https://aclanthology.org/2021.naacl-main.475
DOI:
10.18653/v1/2021.naacl-main.475
Bibkey:
Cite (ACL):
Sihao Chen, Fan Zhang, Kazoo Sone, and Dan Roth. 2021. Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5935–5941, Online. Association for Computational Linguistics.
Cite (Informal):
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection (Chen et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.475.pdf
Video:
https://aclanthology.org/2021.naacl-main.475.mp4