%0 Conference Proceedings
%T Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
%A Min, Sewon
%A Lyu, Xinxi
%A Holtzman, Ari
%A Artetxe, Mikel
%A Lewis, Mike
%A Hajishirzi, Hannaneh
%A Zettlemoyer, Luke
%Y Goldberg, Yoav
%Y Kozareva, Zornitsa
%Y Zhang, Yue
%S Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates
%F min-etal-2022-rethinking
%X Large language models (LMs) are able to in-context learn—perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required—randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently over 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone.
%R 10.18653/v1/2022.emnlp-main.759
%U https://aclanthology.org/2022.emnlp-main.759
%U https://doi.org/10.18653/v1/2022.emnlp-main.759
%P 11048-11064