Adapting LLM Predictions in In-Context Learning with Data Priors

Javier Chiyah-Garcia, Prasoon Goyal, Michael Johnston, Reza Ghanadan


Abstract
In-Context Learning (ICL) has enabled Large Language Models (LLMs) to excel as general-purpose models in zero- and few-shot task settings. However, since LLMs are often not trained on the downstream tasks, they lack crucial contextual knowledge from the data distributions, which limits their task adaptability. This paper explores using data priors to automatically customize prompts in ICL. We extract these priors in a dataset-agnostic way based on historical information, enabling LLMs to personalize their output towards users or tasks at inference time. We find that these priors improve LLM outputs by injecting latent dataset-specific information for the task of rating prediction. Through a series of experiments, we show replicable results across LLMs and datasets on what information and methods are most effective for adapting ICL outputs with priors. Our findings offer a systematic approach to customizing prompts with additional information in a privacy-friendly manner, requiring only aggregated data and remaining computationally efficient.
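To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of injecting an aggregated data prior, here a user's mean historical rating, into an ICL prompt for rating prediction. The function name build_prompt_with_prior and the prompt wording are illustrative assumptions; only the aggregate statistic, not raw user records, enters the prompt, matching the privacy-friendly framing in the abstract.

# Hypothetical sketch: prepend an aggregated "data prior" (the user's
# mean historical rating) to a zero-shot rating-prediction prompt.
# Names and prompt text are illustrative, not from the paper.
from statistics import mean

def build_prompt_with_prior(history: list[int], review: str) -> str:
    """Build an ICL prompt that carries an aggregated rating prior."""
    prior = mean(history)  # aggregate statistic only; no raw records leak
    return (
        f"The user's average rating across past reviews is {prior:.1f}/5.\n"
        f"Review: {review}\n"
        "Predict the rating (1-5):"
    )

print(build_prompt_with_prior([4, 5, 3, 4], "Great product, fast delivery."))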
Anthology ID:
2024.customnlp4u-1.23
Volume:
Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U)
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Sachin Kumar, Vidhisha Balachandran, Chan Young Park, Weijia Shi, Shirley Anugrah Hayati, Yulia Tsvetkov, Noah Smith, Hannaneh Hajishirzi, Dongyeop Kang, David Jurgens
Venue:
CustomNLP4U
Publisher:
Association for Computational Linguistics
Pages:
305–316
URL:
https://aclanthology.org/2024.customnlp4u-1.23
DOI:
10.18653/v1/2024.customnlp4u-1.23
Cite (ACL):
Javier Chiyah-Garcia, Prasoon Goyal, Michael Johnston, and Reza Ghanadan. 2024. Adapting LLM Predictions in In-Context Learning with Data Priors. In Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U), pages 305–316, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Adapting LLM Predictions in In-Context Learning with Data Priors (Chiyah-Garcia et al., CustomNLP4U 2024)
PDF:
https://aclanthology.org/2024.customnlp4u-1.23.pdf