Alleviating Exposure Bias in Abstractive Summarization via Sequentially Generating and Revising

Jiaxin Duan, Fengyu Lu, Junfei Liu


Abstract
Abstractive summarization commonly suffers from exposure bias caused by supervised teacher-forced learning, whereby a model predicts the next token conditioned on the ground-truth prefix during training but on its own preceding outputs at inference. Existing solutions bridge this gap through un- or semi-supervised holistic learning, yet they still leave the risk of error accumulation during summary generation. In this paper, we attribute this problem to the limitation of unidirectional autoregressive text generation and introduce post-processing steps to alleviate it. Specifically, we reformulate abstractive summarization as sequential generation and revision (SeGRe): in the revision phase, the model re-inputs the generated summary and refines it by contrasting it with the source document. This gives the model additional opportunities to assess the flawed summary from a global view and thereby modify inappropriate expressions. Moreover, we train the SeGRe model with a regularized minimum-risk policy to ensure effective generation and revision. Extensive comparative experiments on two well-known datasets show that SeGRe achieves new or matched state-of-the-art performance.
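The abstract describes a two-phase inference procedure: generate a draft summary, then feed the draft back together with the source document so the model can revise it. Below is a minimal, hedged sketch of that generate-then-revise loop using a generic Hugging Face seq2seq backbone; the model name, the `summarize_and_revise` helper, and the way the draft and document are concatenated are illustrative assumptions, not the authors' implementation or training objective (which uses a regularized minimum-risk policy).

```python
# Illustrative sketch only: a generic generate-then-revise loop in the spirit of SeGRe.
# The backbone, input formatting, and helper names are assumptions for demonstration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/bart-large-cnn"  # assumed off-the-shelf summarizer, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


def summarize_and_revise(document: str, num_revisions: int = 1) -> str:
    """Generate a draft summary, then revise it by re-reading it against the source."""
    # Phase 1: generation - produce a draft summary from the document alone.
    inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=1024)
    draft_ids = model.generate(**inputs, max_length=142, num_beams=4)
    summary = tokenizer.decode(draft_ids[0], skip_special_tokens=True)

    # Phase 2: revision - re-input the draft alongside the document and regenerate,
    # giving the model a global view of the flawed summary.
    for _ in range(num_revisions):
        revise_input = f"{summary} {tokenizer.sep_token or '</s>'} {document}"
        inputs = tokenizer(revise_input, return_tensors="pt", truncation=True, max_length=1024)
        revised_ids = model.generate(**inputs, max_length=142, num_beams=4)
        summary = tokenizer.decode(revised_ids[0], skip_special_tokens=True)

    return summary
```

In this sketch the revision step simply conditions a standard summarizer on the draft plus the source; the paper's contribution lies in training the model so that this revision pass reliably corrects accumulated errors.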
Anthology ID:
2024.lrec-main.66
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
739–750
URL:
https://aclanthology.org/2024.lrec-main.66
Cite (ACL):
Jiaxin Duan, Fengyu Lu, and Junfei Liu. 2024. Alleviating Exposure Bias in Abstractive Summarization via Sequentially Generating and Revising. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 739–750, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Alleviating Exposure Bias in Abstractive Summarization via Sequentially Generating and Revising (Duan et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.66.pdf