Bayesian Example Selection Improves In-Context Learning for Speech, Text and Visual Modalities

Siyin Wang, Chao-Han Yang, Ji Wu, Chao Zhang


Abstract
Large language models (LLMs) can adapt to new tasks through in-context learning (ICL) based on a few examples presented in dialogue history without any model parameter update. Despite such convenience, the performance of ICL heavily depends on the quality of the in-context examples presented, which makes the in-context example selection approach a critical choice. This paper proposes a novel Bayesian in-Context example Selection method (ByCS) for ICL. Extending the inference probability conditioned on in-context examples based on Bayes' theorem, ByCS focuses on the inverse inference conditioned on the test input. Following the assumption that an accurate inverse inference probability (likelihood) will result in an accurate inference probability (posterior), in-context examples are selected based on their inverse inference results. Diverse and extensive cross-task and cross-modality experiments are performed with speech, text, and image examples. Experimental results show the efficacy and robustness of our ByCS method on various models, tasks and modalities.
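
As a rough sketch of the idea described in the abstract (the notation below is ours and may differ from the paper's), Bayes' theorem decomposes the inference probability for a test input $x$ given a candidate in-context example $(x_i, y_i)$ as

p(y \mid x, (x_i, y_i)) = \frac{p((x_i, y_i) \mid x, y)\, p(y \mid x)}{p((x_i, y_i) \mid x)}

where the likelihood term $p((x_i, y_i) \mid x, y)$ corresponds to the inverse inference: the test input-output pair is placed in context and the model is scored on recovering $y_i$ from $x_i$. Under the stated assumption that a more accurate likelihood yields a more accurate posterior, the candidates with the best inverse inference results are the ones selected as in-context examples.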
Anthology ID:
2024.emnlp-main.1158
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20812–20828
URL:
https://aclanthology.org/2024.emnlp-main.1158
Cite (ACL):
Siyin Wang, Chao-Han Yang, Ji Wu, and Chao Zhang. 2024. Bayesian Example Selection Improves In-Context Learning for Speech, Text and Visual Modalities. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20812–20828, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Bayesian Example Selection Improves In-Context Learning for Speech, Text and Visual Modalities (Wang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1158.pdf