Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning

Zhe Yang, Damai Dai, Peiyi Wang, Zhifang Sui


Abstract
Large Language Models (LLMs) have recently gained the In-Context Learning (ICL) ability as models scale up, allowing them to quickly adapt to downstream tasks with only a few demonstration examples prepended to the input sequence. Nonetheless, the current practice of ICL treats all demonstration examples equally, which still warrants improvement, as the quality of examples is usually uneven. In this paper, we investigate how to determine approximately optimal weights for demonstration examples and how to apply them during ICL. To assess the quality of weights in the absence of additional validation data, we design a masked self-prediction (MSP) score that exhibits a strong correlation with the final ICL performance. To expedite the weight-searching process, we discretize the continuous weight space and adopt beam search. With approximately optimal weights obtained, we further propose two strategies to apply them to demonstrations at different model positions. Experimental results on 8 text classification tasks show that our approach outperforms conventional ICL by a large margin. Our code is publicly available at https://github.com/Zhe-Young/WICL.
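The weight search described in the abstract (discretize the weight space, then beam-search over assignments scored by MSP) can be illustrated with a short, hypothetical sketch. The `msp_score` callback, the candidate weight grid, and the pad-with-1.0 heuristic for partial assignments are all illustrative assumptions, not the authors' released implementation; see the repository linked above for the actual code.

```python
# Hypothetical sketch (not the authors' released code): beam search over a
# discretized weight space, scored by a stand-in msp_score() callback.
from typing import Callable, List, Tuple


def beam_search_weights(
    n_demos: int,
    candidate_weights: List[float],                     # discretized weight values, e.g. [0.5, 1.0, 1.5]
    msp_score: Callable[[Tuple[float, ...]], float],    # assumed scoring callback (higher is better)
    beam_size: int = 4,
) -> Tuple[float, ...]:
    """Extend partial weight assignments one demonstration at a time,
    keeping the top-`beam_size` partial assignments by MSP score."""
    beam: List[Tuple[float, ...]] = [()]                # start from the empty assignment
    for _ in range(n_demos):
        expanded = [prefix + (w,) for prefix in beam for w in candidate_weights]
        # Score partial assignments by padding the remaining demos with weight 1.0
        # (an illustrative heuristic, not necessarily the paper's choice).
        scored = sorted(
            expanded,
            key=lambda ws: msp_score(ws + (1.0,) * (n_demos - len(ws))),
            reverse=True,
        )
        beam = scored[:beam_size]
    return beam[0]


# Toy usage: the "MSP score" here simply prefers weights close to a hidden ideal profile.
ideal = (1.5, 0.5, 1.0, 1.0)
best = beam_search_weights(
    n_demos=4,
    candidate_weights=[0.5, 1.0, 1.5],
    msp_score=lambda ws: -sum((a - b) ** 2 for a, b in zip(ws, ideal)),
)
print(best)  # -> (1.5, 0.5, 1.0, 1.0)
```

In this sketch the beam search only ever scores `beam_size * |candidate_weights|` assignments per demonstration, which is what makes searching the discretized weight space tractable compared to exhaustively enumerating all combinations.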
Anthology ID:
2023.findings-emnlp.880
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13209–13221
URL:
https://aclanthology.org/2023.findings-emnlp.880
DOI:
10.18653/v1/2023.findings-emnlp.880
Cite (ACL):
Zhe Yang, Damai Dai, Peiyi Wang, and Zhifang Sui. 2023. Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13209–13221, Singapore. Association for Computational Linguistics.
Cite (Informal):
Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning (Yang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.880.pdf