Guiding Abstractive Dialogue Summarization with Content Planning

Ye Wang, Xiaojun Wan, Zhiping Cai


Abstract
Abstractive dialogue summarization has recently received increasing attention. We propose a coarse-to-fine model for generating abstractive dialogue summaries, and introduce a fact-aware reinforcement learning (RL) objective that improves factual consistency between the dialogue and the generated summary. The model first generates the predicate-argument spans of the dialogue, and then generates the final summary under the fact-aware RL objective. Extensive experiments and analysis on two benchmark datasets demonstrate that our proposed method effectively improves the quality of the generated summaries, especially in terms of coherence and consistency.
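The fact-aware RL objective described in the abstract lends itself to a self-critical policy-gradient formulation. The sketch below is a minimal illustration under that assumption, not the paper's implementation: the reward function, its 0.5 weight, and all names are hypothetical stand-ins (a real system would score factual consistency with an entailment- or QA-based model rather than token overlap), and the first-stage content plan of predicate-argument spans is assumed to already condition the summarizer.

import torch

# Hypothetical reward: overlap with the reference summary plus a
# crude "fact" term counting dialogue tokens reused in the summary.
# The 0.5 weight is illustrative, not taken from the paper.
def fact_reward(summary: str, reference: str, dialogue: str) -> float:
    s = set(summary.split())
    ref_term = len(s & set(reference.split()))
    fact_term = len(s & set(dialogue.split()))
    return float(ref_term + 0.5 * fact_term)

# Self-critical loss: the greedy decode serves as a variance-reducing
# baseline, so the gradient raises the probability of sampled
# summaries that beat it under the fact-aware reward.
def self_critical_loss(sample_logprob: torch.Tensor,
                       sampled: str, greedy: str,
                       reference: str, dialogue: str) -> torch.Tensor:
    advantage = (fact_reward(sampled, reference, dialogue)
                 - fact_reward(greedy, reference, dialogue))
    return -advantage * sample_logprob

# Toy usage: a fixed log-probability stands in for the summarizer's
# score of the sampled sequence.
dialogue = "Amy: are you free tonight? Ben: yes, let's watch a movie"
reference = "Amy and Ben will watch a movie tonight."
sampled = "Amy and Ben plan to watch a movie tonight."
greedy = "Amy asks Ben a question."
logprob = torch.tensor(-4.2, requires_grad=True)
self_critical_loss(logprob, sampled, greedy, reference, dialogue).backward()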
Anthology ID:
2022.findings-emnlp.248
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3408–3413
URL:
https://aclanthology.org/2022.findings-emnlp.248
DOI:
10.18653/v1/2022.findings-emnlp.248
Cite (ACL):
Ye Wang, Xiaojun Wan, and Zhiping Cai. 2022. Guiding Abstractive Dialogue Summarization with Content Planning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3408–3413, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Guiding Abstractive Dialogue Summarization with Content Planning (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.248.pdf