On the Abstractiveness of Neural Document Summarization

Fangfang Zhang, Jin-ge Yao, Rui Yan


Abstract
Many modern neural document summarization systems based on encoder-decoder networks are designed to produce abstractive summaries. We verify the degree of abstractiveness of modern neural abstractive summarization systems by calculating overlaps with the source document in terms of various types of units. Upon observing that many abstractive systems tend to be near-extractive in practice, we also implemented a pure copy system, which achieved results comparable to abstractive summarizers while being far more computationally efficient. These findings suggest the possibility of future efforts towards more efficient systems that could better utilize the vocabulary of the original document.
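The overlap analysis described in the abstract can be illustrated with a simple n-gram copy-rate measure. This is a minimal sketch of one plausible unit of overlap (contiguous n-grams), not the authors' exact metric; the function names and the choice of whitespace tokenization are assumptions for illustration.

```python
# Sketch of an n-gram copy-rate metric for quantifying how
# extractive a summary is (illustrative; not the paper's exact method).

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def copy_rate(summary, document, n=2):
    """Fraction of summary n-grams that also occur in the document.

    A value near 1.0 indicates a near-extractive summary; lower
    values indicate more novel (abstractive) phrasing.
    """
    summary_ngrams = ngrams(summary.lower().split(), n)
    if not summary_ngrams:
        return 0.0
    document_ngrams = set(ngrams(document.lower().split(), n))
    copied = sum(1 for g in summary_ngrams if g in document_ngrams)
    return copied / len(summary_ngrams)
```

A summary copied verbatim from the document scores 1.0 under this measure, while a fully paraphrased summary with no shared bigrams scores 0.0; varying `n` (e.g. unigrams, trigrams, longest copied fragments) gives the "various types of units" perspective mentioned above.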
Anthology ID:
D18-1089
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
785–790
URL:
https://aclanthology.org/D18-1089
DOI:
10.18653/v1/D18-1089
Cite (ACL):
Fangfang Zhang, Jin-ge Yao, and Rui Yan. 2018. On the Abstractiveness of Neural Document Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 785–790, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
On the Abstractiveness of Neural Document Summarization (Zhang et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1089.pdf
Data
CNN/Daily Mail