Language Models are Few-shot Multilingual Learners

Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, Pascale Fung


Abstract
General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find that the in-context few-shot cross-lingual predictions of language models are significantly better than random prediction, and they are competitive with existing state-of-the-art cross-lingual models and translation models.
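The in-context few-shot setup the abstract describes can be sketched as follows: a frozen language model is shown a handful of labeled English examples concatenated into a prompt, followed by a non-English query, and is expected to complete the label. This is a minimal illustrative sketch; the prompt template and the SNIPS-style intent labels here are assumptions, not the paper's exact format.

```python
# Hypothetical sketch of in-context few-shot cross-lingual classification.
# The prompt format and labels are illustrative, not the paper's template.

def build_prompt(examples, query):
    """Concatenate labeled English examples, then append the unlabeled query."""
    lines = []
    for text, label in examples:
        lines.append(f"sentence: {text}\nintent: {label}")
    # The query has no label; the frozen LM is asked to complete it.
    lines.append(f"sentence: {query}\nintent:")
    return "\n\n".join(lines)

# A few English examples as context (SNIPS-style intents, illustrative).
few_shot = [
    ("play some jazz music", "PlayMusic"),
    ("what is the weather in Paris", "GetWeather"),
    ("book a table for two tonight", "BookRestaurant"),
]

# A non-English (Spanish) test query: the model sees only English
# examples in context yet is asked to classify a Spanish sentence.
prompt = build_prompt(few_shot, "reserva una mesa para dos esta noche")
print(prompt)
```

In practice this prompt would be passed to a pre-trained model such as GPT or T5 with no parameter updates; the predicted label is read off the model's completion.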
Anthology ID:
2021.mrl-1.1
Volume:
Proceedings of the 1st Workshop on Multilingual Representation Learning
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venues:
EMNLP | MRL
Publisher:
Association for Computational Linguistics
Pages:
1–15
URL:
https://aclanthology.org/2021.mrl-1.1
DOI:
10.18653/v1/2021.mrl-1.1
Cite (ACL):
Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021. Language Models are Few-shot Multilingual Learners. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 1–15, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Language Models are Few-shot Multilingual Learners (Winata et al., MRL 2021)
PDF:
https://aclanthology.org/2021.mrl-1.1.pdf
Code
 gentaiscool/few-shot-lm
Data
MultiNLI | SNIPS