Cross-Task Knowledge Transfer for Query-Based Text Summarization

Elozino Egonmwan, Vittorio Castelli, Md Arafat Sultan
Abstract
We demonstrate the viability of knowledge transfer between two related tasks: machine reading comprehension (MRC) and query-based text summarization. Using an MRC model trained on the SQuAD1.1 dataset as a core system component, we first build an extractive query-based summarizer. For better precision, this summarizer also compresses the output of the MRC model using a novel sentence compression technique. We further leverage pre-trained machine translation systems to abstract our extracted summaries. Our models achieve state-of-the-art results on the publicly available CNN/Daily Mail and Debatepedia datasets, and can serve as simple yet powerful baselines for future systems. We also hope that these results will encourage research on transfer learning from large MRC corpora to query-based summarization.
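The abstract describes a three-stage pipeline: query-based sentence extraction with an MRC model, sentence compression of the extracted output, and MT-based abstraction. A minimal sketch of that pipeline shape is given below; every function here is a toy placeholder (overlap-based ranking, parenthetical stripping, plain concatenation), not the authors' actual MRC model, compression technique, or translation system.

```python
import re

def extract(query, sentences, k=1):
    # Placeholder for the MRC-based extractor (the paper uses a model
    # trained on SQuAD1.1): rank sentences by word overlap with the
    # query and keep the top k.
    q = set(query.lower().split())
    return sorted(sentences,
                  key=lambda s: len(q & set(s.lower().split())),
                  reverse=True)[:k]

def compress(sentence):
    # Placeholder for the paper's sentence compression step: here we
    # just strip a trailing parenthetical clause as a stand-in.
    return re.sub(r"\s*\([^)]*\)", "", sentence)

def abstract_summary(sentences):
    # Placeholder for the MT-based abstraction step; the paper uses
    # pre-trained machine translation systems, here a plain join.
    return " ".join(sentences)

def summarize(query, document):
    # End-to-end pipeline: extract -> compress -> abstract.
    extracted = extract(query, document, k=1)
    compressed = [compress(s) for s in extracted]
    return abstract_summary(compressed)
```

The value of the sketch is the staging, not the heuristics: each placeholder marks where the paper's transferred MRC model, novel compression technique, and MT-based abstractor would plug in.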
Anthology ID:
D19-5810
Volume:
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
72–77
URL:
https://aclanthology.org/D19-5810
DOI:
10.18653/v1/D19-5810
Cite (ACL):
Elozino Egonmwan, Vittorio Castelli, and Md Arafat Sultan. 2019. Cross-Task Knowledge Transfer for Query-Based Text Summarization. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 72–77, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Cross-Task Knowledge Transfer for Query-Based Text Summarization (Egonmwan et al., 2019)
PDF:
https://aclanthology.org/D19-5810.pdf