Composing Elementary Discourse Units in Abstractive Summarization

Zhenwen Li, Wenhao Wu, Sujian Li


Abstract
In this paper, we argue that the elementary discourse unit (EDU) is a more appropriate textual unit for content selection than the sentence in abstractive summarization. To handle the problem of composing EDUs into an informative and fluent summary, we propose a novel summarization method that first applies an EDU selection model to extract and group informative EDUs, and then an EDU fusion model to fuse the EDUs in each group into one sentence. We also design a reinforcement learning mechanism that uses the EDU fusion results to reward the EDU selection actions, boosting the final summarization performance. Experiments on CNN/Daily Mail demonstrate the effectiveness of our model.
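The abstract describes a two-stage pipeline: a selection model that extracts and groups EDUs, a fusion model that merges each group into one sentence, and a reward computed from the fused output. The sketch below is purely illustrative, not the authors' code: the threshold-based selector, the join-based "fusion", and the unigram-overlap reward are hypothetical stand-ins for the paper's learned components.

```python
# Illustrative sketch of the select -> group -> fuse -> reward pipeline the
# abstract describes. All functions here are toy stand-ins for learned models.

def select_and_group_edus(edus, scores, threshold=0.5, group_size=2):
    """Stand-in for the EDU selection model: keep EDUs whose (model-assigned)
    score passes a threshold, then group consecutive selections."""
    selected = [e for e, s in zip(edus, scores) if s >= threshold]
    return [selected[i:i + group_size] for i in range(0, len(selected), group_size)]

def fuse_group(group):
    """Stand-in for the EDU fusion model: join a group's EDUs into one sentence.
    (The real model generates a fluent fused sentence.)"""
    return " ".join(group) + "."

def summarize(edus, scores):
    """Run the full pipeline: select and group EDUs, fuse each group."""
    return " ".join(fuse_group(g) for g in select_and_group_edus(edus, scores))

def fusion_reward(summary, reference):
    """Toy reward signal for the RL mechanism: unigram overlap between the
    fused summary and a reference summary, in [0, 1]."""
    s, r = set(summary.lower().split()), set(reference.lower().split())
    return len(s & r) / max(len(r), 1)
```

For example, with `edus = ["the storm hit the coast", "causing floods", "officials warned residents", "to evacuate"]` and selection scores `[0.9, 0.8, 0.7, 0.2]`, the selector keeps the first three EDUs, groups them in pairs, and the fused summary is `"the storm hit the coast causing floods. officials warned residents."`. In the actual method, `fusion_reward` would feed back into training the selection policy.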
Anthology ID:
2020.acl-main.551
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6191–6196
URL:
https://aclanthology.org/2020.acl-main.551
DOI:
10.18653/v1/2020.acl-main.551
Cite (ACL):
Zhenwen Li, Wenhao Wu, and Sujian Li. 2020. Composing Elementary Discourse Units in Abstractive Summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6191–6196, Online. Association for Computational Linguistics.
Cite (Informal):
Composing Elementary Discourse Units in Abstractive Summarization (Li et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.551.pdf
Video:
http://slideslive.com/38928858