A Novel Wikipedia based Dataset for Monolingual and Cross-Lingual Summarization

Mehwish Fatima, Michael Strube


Abstract
Cross-lingual summarization is a challenging task for which there are no cross-lingual scientific resources currently available. To overcome the lack of a high-quality resource, we present a new dataset for monolingual and cross-lingual summarization considering the English-German pair. We collect high-quality, real-world cross-lingual data from Spektrum der Wissenschaft, which publishes human-written German scientific summaries of English science articles on various subjects. The generated Spektrum dataset is small; therefore, we harvest a similar dataset from the Wikipedia Science Portal to complement it. The Wikipedia dataset consists of English and German articles, which can be used for monolingual and cross-lingual summarization. Furthermore, we present a quantitative analysis of the datasets and results of empirical experiments with several existing extractive and abstractive summarization models. The results suggest the viability and usefulness of the proposed dataset for monolingual and cross-lingual summarization.
Anthology ID:
2021.newsum-1.5
Volume:
Proceedings of the Third Workshop on New Frontiers in Summarization
Month:
November
Year:
2021
Address:
Online and in Dominican Republic
Venues:
EMNLP | newsum
Publisher:
Association for Computational Linguistics
Pages:
39–50
URL:
https://aclanthology.org/2021.newsum-1.5
DOI:
10.18653/v1/2021.newsum-1.5
PDF:
https://aclanthology.org/2021.newsum-1.5.pdf
Code
mehwishfatimah/wsd
Data
WikiLingua