Discrete and Soft Prompting for Multilingual Models

Mengjie Zhao, Hinrich Schütze


Abstract
It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual settings: crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79% accuracy, respectively. We also demonstrate good performance of prompting with training data in multiple languages other than English.
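To make the idea of discrete prompting concrete, below is a minimal sketch of cloze-style prompting for NLI with a multilingual masked LM (XLM-R). The template and the Yes/Maybe/No verbalizer here are illustrative assumptions for exposition, not necessarily the exact pattern used in the paper; see the linked code repository for the authors' implementation.

```python
# Minimal sketch: cloze-style discrete prompting for NLI with XLM-R.
# The template and verbalizer are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
model.eval()

# Hypothetical verbalizer: map each NLI label to a single word whose
# masked-LM score at the mask position decides the prediction.
verbalizer = {"entailment": "Yes", "neutral": "Maybe", "contradiction": "No"}
label_token_ids = {
    label: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]
    for label, word in verbalizer.items()
}

def classify(premise: str, hypothesis: str) -> str:
    # Wrap the example in a cloze template with one mask token.
    text = f"{premise} ? {tokenizer.mask_token} , {hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, seq_len, vocab_size)
    # Locate the mask position and compare verbalizer-token scores there.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    scores = {label: logits[0, mask_pos, tid].item()
              for label, tid in label_token_ids.items()}
    return max(scores, key=scores.get)

print(classify("A man is playing a guitar.", "A person is making music."))
```

In the zero-shot or few-shot regime, the PLM's parameters stay largely (or entirely) frozen and the task signal comes from the template; soft prompting instead replaces the natural-language template tokens with trainable embedding vectors.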
Anthology ID:
2021.emnlp-main.672
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8547–8555
URL:
https://aclanthology.org/2021.emnlp-main.672
DOI:
10.18653/v1/2021.emnlp-main.672
Cite (ACL):
Mengjie Zhao and Hinrich Schütze. 2021. Discrete and Soft Prompting for Multilingual Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8547–8555, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Discrete and Soft Prompting for Multilingual Models (Zhao & Schütze, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.672.pdf
Video:
https://aclanthology.org/2021.emnlp-main.672.mp4
Code
mprompting/xlmrprompt
Data
MultiNLI