Jointly Learning Guidance Induction and Faithful Summary Generation via Conditional Variational Autoencoders

Wang Xu, Tiejun Zhao


Abstract
Abstractive summarization can produce high-quality results thanks to advances in neural networks. However, generating factually consistent summaries remains a challenging task for abstractive summarization. Recent studies extract additional information from the source document with off-the-shelf tools and use it as a clue to guide summary generation, which has proven effective for improving faithfulness. Unlike these works, we present a novel framework based on conditional variational autoencoders that induces the guidance information and generates the guided summary synchronously. Experiments on the XSUM and CNNDM datasets show that our approach generates relevant and fluent summaries that are more faithful than those of existing state-of-the-art approaches, according to multiple factual consistency metrics.
Anthology ID:
2022.findings-naacl.180
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2340–2350
URL:
https://aclanthology.org/2022.findings-naacl.180
DOI:
10.18653/v1/2022.findings-naacl.180
Cite (ACL):
Wang Xu and Tiejun Zhao. 2022. Jointly Learning Guidance Induction and Faithful Summary Generation via Conditional Variational Autoencoders. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2340–2350, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Jointly Learning Guidance Induction and Faithful Summary Generation via Conditional Variational Autoencoders (Xu & Zhao, Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.180.pdf