Guiding Abstractive Dialogue Summarization with Content Planning
Ye Wang | Xiaojun Wan | Zhiping Cai
Findings of the Association for Computational Linguistics: EMNLP 2022
Abstractive dialogue summarization has recently been receiving more attention. We propose a coarse-to-fine model for generating abstractive dialogue summaries, and introduce a fact-aware reinforcement learning (RL) objective that improves the factual consistency between the dialogue and the generated summary. The model first generates the predicate-argument spans of the dialogue as a content plan, and then generates the final summary guided by the fact-aware RL objective. Extensive experiments and analysis on two benchmark datasets demonstrate that our proposed method effectively improves the quality of the generated summary, especially in coherence and consistency.
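The abstract does not spell out the form of the fact-aware RL objective. A common way to realize this kind of objective in summarization is self-critical sequence training with a reward that mixes a ROUGE-style overlap term and a factual-consistency term; the sketch below is a hypothetical illustration under that assumption, not the authors' implementation. The scorers `rouge_fn` and `fact_fn`, the mixing weight `alpha`, and all function names are ours.

```python
# Hypothetical sketch of a fact-aware RL objective in the self-critical
# sequence training style. The reward mixture, the scorers, and the names
# below are illustrative assumptions, not the paper's actual objective.
import torch


def fact_aware_reward(summary, reference, dialogue, rouge_fn, fact_fn, alpha=0.5):
    """Mix a ROUGE-style reward against the reference with a factual-
    consistency score between the summary and the source dialogue."""
    rouge_score = rouge_fn(summary, reference)   # e.g. ROUGE-L F1
    fact_score = fact_fn(summary, dialogue)      # e.g. entailment probability
    return alpha * rouge_score + (1.0 - alpha) * fact_score


def self_critical_loss(sample_log_probs, sampled_reward, baseline_reward):
    """REINFORCE with a greedy-decoding baseline:
    minimize -(r_sample - r_baseline) * sum(log p(sampled tokens))."""
    advantage = sampled_reward - baseline_reward
    return -advantage * sample_log_probs.sum()


if __name__ == "__main__":
    # Toy usage with dummy scorers and dummy token log-probabilities.
    dummy_rouge = lambda hyp, ref: 0.42 if hyp == "sampled summary" else 0.35
    dummy_fact = lambda hyp, src: 0.80

    r_sample = fact_aware_reward("sampled summary", "reference", "dialogue",
                                 dummy_rouge, dummy_fact)
    r_greedy = fact_aware_reward("greedy summary", "reference", "dialogue",
                                 dummy_rouge, dummy_fact)

    token_probs = torch.tensor([0.9, 0.7, 0.8], requires_grad=True)
    loss = self_critical_loss(torch.log(token_probs), r_sample, r_greedy)
    loss.backward()
    print(float(loss))
```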