Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning

Hongfu Liu, Ye Wang


Abstract
Large Language Models (LLMs) can perform In-Context Learning (ICL) by conditioning on a few demonstrations of a new downstream task. However, this learning paradigm suffers from high instability: the input distribution of the selected examples, their ordering, and the prompt format all induce substantial variance. In this work, we demonstrate that even when all of these factors are held constant, randomly selecting examples still results in high variance. Consequently, we explore the informativeness of data examples by quantifying the Information Gain (IG) obtained in prediction after observing a given example candidate, and propose to sample those with maximum IG. Additionally, we identify a template bias that can lead to unfair evaluation of IG during sampling. To mitigate this bias, we introduce a Calibration Before Sampling strategy. Experimental results show that our proposed method yields an average relative improvement of 14.3% across six classification tasks using three LLMs.
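The abstract describes scoring each candidate demonstration by the Information Gain it yields in prediction, after calibrating away template bias. The sketch below illustrates one plausible reading of that pipeline, not the paper's exact estimator: `label_probs` is a hypothetical callable standing in for an LLM that returns label probabilities for a query under a given prompt, IG is measured as the average entropy reduction on a small unlabeled set, and calibration divides out probabilities obtained from a content-free input (one common scheme; the paper's precise procedure is in the full text).

```python
import math


def entropy(probs):
    """Shannon entropy (in nats) of a discrete label distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def calibrate(probs, content_free_probs):
    """Divide out template bias estimated from a content-free query
    (e.g. "N/A"), then renormalize -- a common calibration scheme,
    assumed here for illustration."""
    adjusted = [p / b for p, b in zip(probs, content_free_probs)]
    z = sum(adjusted)
    return [p / z for p in adjusted]


def select_max_ig(candidates, dev_inputs, label_probs, content_free="N/A"):
    """Pick the candidate demonstration whose inclusion yields the largest
    average entropy reduction (IG) over a small unlabeled dev set.

    `label_probs(prompt, query)` is a hypothetical model interface that
    returns a list of label probabilities.
    """
    best, best_ig = None, -math.inf
    for cand in candidates:
        # Estimate this candidate's template bias from a content-free query.
        bias = label_probs(prompt=cand, query=content_free)
        ig = 0.0
        for x in dev_inputs:
            # Calibrated predictive distribution without any demonstration.
            prior = calibrate(label_probs(prompt="", query=x),
                              label_probs(prompt="", query=content_free))
            # Calibrated predictive distribution with the candidate included.
            posterior = calibrate(label_probs(prompt=cand, query=x), bias)
            ig += entropy(prior) - entropy(posterior)
        ig /= len(dev_inputs)
        if ig > best_ig:
            best, best_ig = cand, ig
    return best, best_ig
```

With a toy model that becomes more confident only when a genuinely informative demonstration is present, the candidate producing the sharper calibrated posterior is selected.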
Anthology ID:
2023.findings-emnlp.1060
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15825–15838
URL:
https://aclanthology.org/2023.findings-emnlp.1060
DOI:
10.18653/v1/2023.findings-emnlp.1060
Cite (ACL):
Hongfu Liu and Ye Wang. 2023. Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15825–15838, Singapore. Association for Computational Linguistics.
Cite (Informal):
Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning (Liu & Wang, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.1060.pdf