Robustness of Named-Entity Replacements for In-Context Learning

Saeed Goodarzi, Nikhil Kagita, Dennis Minn, Shufan Wang, Roberto Dessi, Shubham Toshniwal, Adina Williams, Jack Lanchantin, Koustuv Sinha


Abstract
A key feature of modern large language models (LLMs) is their ability to perform in-context learning, a prompting technique where query-answer demonstrations are shown before the final query. This allows for generalization to novel distributions at inference time, where the LLM can learn new rules without parameter updates. However, the choice of demonstrations and their relationship to a particular query can have a profound impact on model accuracy, raising concerns about true in-context generalization capabilities (Zhao et al., 2021). In this work, we explore the robustness of the in-context learning paradigm by focusing on entities. In particular, we seek to understand the robustness of LLM in-context learning with respect to named entity replacements. Across three popular reasoning tasks and two popular LLMs, we discover significant variance in downstream performance based on the choice of named entities: test-set accuracy can fluctuate between −2.7 and +8.0 points depending on the choice of named entity replacements. Our analysis exposes the sensitivity of LLM in-context learning to named entities and offers a simple recipe for improving test performance by hyper-parameter tuning the named entities for a given dataset. Code and datasets for reproducing the results are publicly available.
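The recipe the abstract describes, treating the named entities in the demonstrations as a hyper-parameter, could be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the candidate names, the `replace_entities` and `tune_entities` helpers, and the random-search loop are all assumptions, and the scoring function stands in for whatever dev-set evaluation of the LLM is actually used.

```python
import random

# Hypothetical sketch: swap named entities in in-context demonstrations
# and keep the replacement set that scores best on a held-out dev set.

def replace_entities(demonstration: str, mapping: dict) -> str:
    """Replace each original entity string with its candidate substitute."""
    for original, substitute in mapping.items():
        demonstration = demonstration.replace(original, substitute)
    return demonstration

def tune_entities(demos, entities, candidates, score_fn, trials=10, seed=0):
    """Treat the entity choice as a hyper-parameter: sample replacement
    mappings at random and return the demonstrations with the best score.

    score_fn stands in for running the LLM on a dev set with the given
    demonstrations and measuring accuracy (an assumption of this sketch)."""
    rng = random.Random(seed)
    best_score, best_demos = float("-inf"), list(demos)
    for _ in range(trials):
        mapping = {entity: rng.choice(candidates) for entity in entities}
        new_demos = [replace_entities(d, mapping) for d in demos]
        score = score_fn(new_demos)
        if score > best_score:
            best_score, best_demos = score, new_demos
    return best_demos, best_score
```

A real pipeline would first extract the named entities with an NER tagger and use dev-set accuracy of the target LLM as `score_fn`; the sketch only shows the outer search over replacements.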
Anthology ID:
2023.findings-emnlp.728
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10914–10931
URL:
https://aclanthology.org/2023.findings-emnlp.728
DOI:
10.18653/v1/2023.findings-emnlp.728
Cite (ACL):
Saeed Goodarzi, Nikhil Kagita, Dennis Minn, Shufan Wang, Roberto Dessi, Shubham Toshniwal, Adina Williams, Jack Lanchantin, and Koustuv Sinha. 2023. Robustness of Named-Entity Replacements for In-Context Learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10914–10931, Singapore. Association for Computational Linguistics.
Cite (Informal):
Robustness of Named-Entity Replacements for In-Context Learning (Goodarzi et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.728.pdf