Himani Shrotriya
2022
IndicNLG Benchmark: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
Aman Kumar | Himani Shrotriya | Prachi Sahu | Amogh Mishra | Raj Dabre | Ratish Puduppully | Anoop Kunchukuttan | Mitesh M. Khapra | Pratyush Kumar
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Natural Language Generation (NLG) for non-English languages is hampered by the scarcity of datasets in these languages. We present the IndicNLG Benchmark, a collection of datasets for benchmarking NLG for 11 Indic languages. We focus on five diverse tasks, namely, biography generation using Wikipedia infoboxes, news headline generation, sentence summarization, paraphrase generation, and question generation. We describe the created datasets and use them to benchmark the performance of several monolingual and multilingual baselines that leverage pre-trained sequence-to-sequence models. Our results demonstrate the strong performance of multilingual language-specific pre-trained models, and the utility of models trained on our dataset for other related NLG tasks. Our dataset creation methods can be easily applied to modest-resource languages as they involve simple steps such as scraping news articles and Wikipedia infoboxes, light cleaning, and pivoting through machine translation data. To the best of our knowledge, the IndicNLG Benchmark is the first NLG benchmark for Indic languages and the most diverse multilingual NLG dataset, with approximately 8M examples across 5 tasks and 11 languages. The datasets and models will be publicly available.
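To make the "scrape, lightly clean, pair" recipe mentioned in the abstract concrete, the sketch below shows how a scraped news record could be turned into a headline-generation training pair. It is a minimal illustration only: the field names (`headline`, `body`) and the cleaning rules are hypothetical assumptions, not the authors' actual pipeline.

```python
import re
from typing import Optional


def clean_text(text: str) -> str:
    """Light cleaning: collapse whitespace. The exact rules are illustrative,
    not the paper's pipeline."""
    return re.sub(r"\s+", " ", text).strip()


def make_headline_example(article: dict) -> Optional[dict]:
    """Turn a scraped article (hypothetical 'headline'/'body' fields) into a
    headline-generation pair: input = article body, target = headline."""
    headline = clean_text(article.get("headline", ""))
    body = clean_text(article.get("body", ""))
    # Skip degenerate pairs (missing fields or headline identical to the body).
    if not headline or not body or headline == body:
        return None
    return {"input": body, "target": headline}


# Usage with a toy scraped record (fields and content are made up for illustration).
example = make_headline_example({
    "headline": "मुख्य समाचार: नई योजना की घोषणा",
    "body": "सरकार ने आज एक नई योजना की घोषणा की ...",
})
print(example)
```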
IndicBART: A Pre-trained Model for Indic Natural Language Generation
Raj Dabre | Himani Shrotriya | Anoop Kunchukuttan | Ratish Puduppully | Mitesh Khapra | Pratyush Kumar
Findings of the Association for Computational Linguistics: ACL 2022
In this paper, we study pre-trained sequence-to-sequence models for a group of related languages, with a focus on Indic languages. We present IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English. IndicBART utilizes the orthographic similarity between Indic scripts to improve transfer learning between similar Indic languages. We evaluate IndicBART on two NLG tasks: Neural Machine Translation (NMT) and extreme summarization. Our experiments on NMT and extreme summarization show that a model specific to related languages like IndicBART is competitive with large pre-trained models like mBART50 despite being significantly smaller. It also performs well on very low-resource translation scenarios where languages are not included in pre-training or fine-tuning. Script sharing, multilingual training, and better utilization of limited model capacity contribute to the good performance of the compact IndicBART model.
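As a rough illustration of how a compact pre-trained seq2seq model like IndicBART could be queried for a generation task such as extreme summarization, the sketch below uses the Hugging Face transformers API. The checkpoint name `ai4bharat/IndicBART` and the language-tag conventions are assumptions based on the public release, not details stated in the abstract.

```python
# Minimal inference sketch with Hugging Face transformers. The checkpoint name
# (ai4bharat/IndicBART) and the "<2xx>" language-tag format are assumptions
# taken from the public model card, not from the abstract above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART", use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART")

# Inputs end with an end-of-sentence marker and a source-language tag (assumed convention).
text = "यह एक लंबा समाचार लेख है जिसका एक वाक्य में सार चाहिए। </s> <2hi>"
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False)

# Generate a short summary; decoding starts from the target-language tag.
summary_ids = model.generate(
    inputs.input_ids,
    max_length=32,
    num_beams=4,
    decoder_start_token_id=tokenizer.convert_tokens_to_ids("<2hi>"),
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

In practice the same model would typically be fine-tuned on task-specific data (e.g. the summarization or NMT sets studied in the paper) before generation; the snippet only shows the loading and decoding pattern.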
Co-authors
- Raj Dabre 2
- Ratish Puduppully 2
- Anoop Kunchukuttan 2
- Mitesh M. Khapra 2
- Pratyush Kumar 2