%0 Conference Proceedings
%T Cross-lingual Few-Shot Learning on Unseen Languages
%A Winata, Genta
%A Wu, Shijie
%A Kulkarni, Mayank
%A Solorio, Thamar
%A Preotiuc-Pietro, Daniel
%Y He, Yulan
%Y Ji, Heng
%Y Li, Sujian
%Y Liu, Yang
%Y Chang, Chia-Hui
%S Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
%D 2022
%8 November
%I Association for Computational Linguistics
%C Online only
%F winata-etal-2022-cross
%X Large pre-trained language models (LMs) have demonstrated the ability to obtain good performance on downstream tasks with limited examples in cross-lingual settings. However, this was mostly studied for relatively resource-rich languages, where at least enough unlabeled data is available to be included in pre-training a multilingual language model. In this paper, we explore the problem of cross-lingual transfer in unseen languages, where no unlabeled data is available for pre-training a model. We use a downstream sentiment analysis task across 12 languages, including 8 unseen languages, to analyze the effectiveness of several few-shot learning strategies across the three major types of model architectures and their learning dynamics. We also compare strategies for selecting languages for transfer and contrast findings across languages seen in pre-training compared to those that are not. Our findings contribute to the body of knowledge on cross-lingual models for low-resource settings that is paramount to increasing coverage, diversity, and equity in access to NLP technology. We show that, in few-shot learning, linguistically similar and geographically similar languages are useful for cross-lingual adaptation, but taking the context from a mixture of random source languages is surprisingly more effective. We also compare different model architectures and show that the encoder-only model, XLM-R, gives the best downstream task performance.
%U https://aclanthology.org/2022.aacl-main.59
%P 777-791