BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization

Kai Wang, Xiaojun Quan, Rui Wang


Abstract
The success of neural summarization models stems from the meticulous encoding of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages templates discovered from the training data to softly select key information from each source article and guide its summarization. Extensive experiments on a standard summarization dataset show that the template-equipped BiSET model improves summarization performance significantly and achieves a new state of the art.
Anthology ID:
P19-1207
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2153–2162
URL:
https://aclanthology.org/P19-1207
DOI:
10.18653/v1/P19-1207
Cite (ACL):
Kai Wang, Xiaojun Quan, and Rui Wang. 2019. BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2153–2162, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization (Wang et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1207.pdf
Code
 InitialBug/BiSET
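
As a rough illustration of the soft-selection idea described in the abstract, the sketch below gates the hidden states of a source article with a pooled representation of a retrieved template, so that template-relevant content is kept and the rest is attenuated. This is a minimal NumPy sketch under assumed shapes and names, not the authors' implementation (see InitialBug/BiSET for the official code); in the full model such selection is applied bi-directionally between article and template.

# Hypothetical sketch of template-guided soft selection; names, shapes,
# and parameters are illustrative only, not the BiSET implementation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def selective_gate(article_states, template_summary, W_a, W_t, b):
    """Softly filter article hidden states using a template representation.

    article_states:   (n, d) hidden states of the source article
    template_summary: (d,)   pooled representation of the retrieved template
    W_a, W_t:         (d, d) projection matrices (random here, learned in practice)
    b:                (d,)   bias
    Returns gated article states of shape (n, d).
    """
    # One gate per article position and dimension, conditioned on the template.
    gate = sigmoid(article_states @ W_a + template_summary @ W_t + b)  # (n, d)
    return article_states * gate  # element-wise soft selection

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 30, 8                                 # 30 article positions, hidden size 8
    article_states = rng.normal(size=(n, d))
    template_summary = rng.normal(size=(d,))
    W_a, W_t = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    b = np.zeros(d)
    gated = selective_gate(article_states, template_summary, W_a, W_t, b)
    print(gated.shape)                           # (30, 8)

In the paper's setting, the gated article representation would then feed a standard attention-based decoder; the sketch above only shows the selection step, with random matrices standing in for learned parameters.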