EASE: Extractive-Abstractive Summarization End-to-End using the Information Bottleneck Principle

Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, Marjan Ghazvininejad


Abstract
Current abstractive summarization systems outperform their extractive counterparts, but their widespread adoption is inhibited by the inherent lack of interpretability. Extractive summarization systems, though interpretable, suffer from redundancy and possible lack of coherence. To achieve the best of both worlds, we propose EASE, an extractive-abstractive framework that generates concise abstractive summaries that can be traced back to an extractive summary. Our framework can be applied to any evidence-based text generation problem and can accommodate various pretrained models in its simple architecture. We use the Information Bottleneck principle to jointly train the extraction and abstraction steps in an end-to-end fashion. Inspired by previous research showing that humans summarize long documents in two stages (Jing and McKeown, 2000), our framework first extracts a pre-defined number of evidence spans and then generates a summary using only the extracted evidence. Using automatic and human evaluations, we show that the generated summaries are better than those of strong extractive and extractive-abstractive baselines.
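The abstract describes a two-stage pipeline: extract a fixed budget of evidence spans, then generate the summary from that evidence alone, with both stages trained jointly under an Information Bottleneck objective. The sketch below is a minimal, hypothetical simplification of that idea, not the paper's exact formulation: `extract_top_k` is a hard top-k span selector, and `ib_loss` adds a KL penalty pushing per-span selection probabilities toward a sparse Bernoulli prior, a common IB-style surrogate for limiting how much of the source the evidence carries. All names, the prior `prior_pi`, and the weight `beta` are illustrative assumptions.

```python
import numpy as np

def extract_top_k(scores, k):
    """Hard extraction step: return indices of the k highest-scoring
    evidence spans (hypothetical stand-in for EASE's extractor)."""
    return sorted(np.argsort(scores)[-k:].tolist())

def ib_loss(task_nll, select_probs, prior_pi=0.3, beta=1.0):
    """IB-style objective (simplified): abstraction task loss plus a
    beta-weighted KL(q(select) || Bernoulli(prior_pi)) that keeps the
    extracted evidence sparse."""
    p = np.clip(np.asarray(select_probs, dtype=float), 1e-8, 1 - 1e-8)
    kl = p * np.log(p / prior_pi) + (1 - p) * np.log((1 - p) / (1 - prior_pi))
    return float(task_nll + beta * kl.sum())

# Usage: pick 2 of 4 candidate spans, then score the joint objective.
evidence = extract_top_k([0.1, 0.9, 0.5, 0.2], k=2)  # indices of top spans
loss = ib_loss(2.0, [0.3, 0.3])  # KL term vanishes when probs match the prior
```

When the selection probabilities equal the prior, the KL term is zero and the objective reduces to the abstraction loss; raising `beta` trades summary fidelity for sparser evidence.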
Anthology ID:
2021.newsum-1.10
Volume:
Proceedings of the Third Workshop on New Frontiers in Summarization
Month:
November
Year:
2021
Address:
Online and in Dominican Republic
Venues:
EMNLP | newsum
Publisher:
Association for Computational Linguistics
Pages:
85–95
URL:
https://aclanthology.org/2021.newsum-1.10
DOI:
10.18653/v1/2021.newsum-1.10
PDF:
https://aclanthology.org/2021.newsum-1.10.pdf