Understanding the Behaviour of Neural Abstractive Summarizers using Contrastive Examples

Krtin Kumar, Jackie Chi Kit Cheung


Abstract
Neural abstractive summarizers generate summary texts using a language model conditioned on the input source text, and have recently achieved high ROUGE scores on benchmark summarization datasets. We investigate how they achieve this performance with respect to human-written gold-standard abstracts, and whether the systems are able to understand deeper syntactic and semantic structures. We generate a set of contrastive summaries, which are perturbed, deficient versions of human-written summaries, and test whether existing neural summarizers score them more highly than the human-written summaries. We analyze their performance on different datasets and find that these systems fail to understand the source text in a majority of cases.
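The core test described in the abstract can be sketched in a few lines: score a human-written summary and a perturbed, contrastive variant under a summarizer's conditional language model, then check which one the model prefers. The sketch below is illustrative rather than the paper's actual setup; the choice of model (facebook/bart-large-cnn via Hugging Face Transformers) and the negation-based perturbation are assumptions, since the paper evaluates its own set of 2019-era systems and perturbation types.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Illustrative summarizer; NOT one of the systems evaluated in the paper.
model_name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
model.eval()

def summary_log_likelihood(source: str, summary: str) -> float:
    """Mean per-token log-probability of `summary` conditioned on `source`."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(summary, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    # out.loss is the mean token-level cross-entropy; negate it to get
    # the average log-likelihood of the summary under the model.
    return -out.loss.item()

source = ("The city council approved the new transit budget on Tuesday, "
          "allocating funds for two additional bus routes.")
gold = "The council approved a transit budget adding two bus routes."
# Hypothetical contrastive variant: a perturbation that corrupts the meaning.
contrastive = "The council rejected a transit budget adding two bus routes."

gold_score = summary_log_likelihood(source, gold)
contrastive_score = summary_log_likelihood(source, contrastive)
print(f"gold: {gold_score:.3f}  contrastive: {contrastive_score:.3f}")
```

A summarizer that has captured the relevant semantics should assign the gold summary a strictly higher score than the contrastive one; the paper's finding is that existing systems often do not.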
Anthology ID: N19-1396
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 3949–3954
URL: https://aclanthology.org/N19-1396
DOI: 10.18653/v1/N19-1396
PDF: https://aclanthology.org/N19-1396.pdf