Demonstration Augmentation for Zero-shot In-context Learning

Yi Su, Yunpeng Tai, Yixin Ji, Juntao Li, Yan Bowen, Min Zhang


Abstract
Large Language Models (LLMs) have demonstrated an impressive capability known as In-context Learning (ICL), which enables them to acquire knowledge from textual demonstrations without the need for parameter updates. However, many studies have highlighted that the model’s performance is sensitive to the choice of demonstrations, presenting a significant challenge for practical applications where we lack prior knowledge of user queries. Consequently, we need to construct an extensive demonstration pool and incorporate external databases to assist the model, leading to considerable time and financial costs. In light of this, some recent research has shifted focus towards zero-shot ICL, aiming to reduce models’ reliance on external information by leveraging their inherent generative capabilities. Despite the effectiveness of these approaches, the content generated by the model may be unreliable, and the generation process is time-consuming. To address these issues, we propose Demonstration Augmentation for In-context Learning (DAIL), which employs the model’s previously predicted historical samples as demonstrations for subsequent ones. DAIL incurs no additional inference cost and does not rely on the model’s generative capabilities. Our experiments reveal that DAIL can significantly improve the model’s performance over direct zero-shot inference and can even outperform few-shot ICL without any external information.
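The abstract describes the DAIL mechanism only at a high level. As a rough illustration of reusing self-predicted samples as demonstrations, here is a minimal Python sketch; the names (llm_predict, build_prompt, max_demos) and the simple most-recent-history selection are assumptions for illustration, not the authors' implementation, which may select or filter historical samples differently.

```python
# Minimal sketch of the DAIL idea: reuse the model's own earlier
# predictions as in-context demonstrations for later queries.
# Hypothetical names throughout; not the paper's actual code.

from typing import List, Tuple

def llm_predict(prompt: str) -> str:
    """Placeholder for one call to an LLM that returns a predicted label."""
    raise NotImplementedError("Plug in your model API here.")

def build_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Format accumulated (input, predicted label) pairs plus the new query."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demos]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

def dail_inference(queries: List[str], max_demos: int = 4) -> List[str]:
    history: List[Tuple[str, str]] = []  # self-predicted demonstrations
    predictions: List[str] = []
    for q in queries:
        # Prepend previously predicted samples; no external pool is needed.
        prompt = build_prompt(history[-max_demos:], q)
        label = llm_predict(prompt)      # exactly one model call per query
        history.append((q, label))       # becomes a demo for later queries
        predictions.append(label)
    return predictions
```

Note that each query still costs exactly one model call, which is consistent with the abstract's claim that the method adds no inference cost and requires no separate generation step.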
Anthology ID:
2024.findings-acl.846
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14232–14244
URL:
https://aclanthology.org/2024.findings-acl.846
Cite (ACL):
Yi Su, Yunpeng Tai, Yixin Ji, Juntao Li, Yan Bowen, and Min Zhang. 2024. Demonstration Augmentation for Zero-shot In-context Learning. In Findings of the Association for Computational Linguistics ACL 2024, pages 14232–14244, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Demonstration Augmentation for Zero-shot In-context Learning (Su et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.846.pdf