Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning

Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao


Abstract
Large language models (LLMs) enable in-context learning (ICL) by conditioning on a few labeled training examples as a text-based prompt, eliminating the need for parameter updates while achieving competitive performance. In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets: the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. To unleash the power of LLMs in few-shot learning scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework to further improve the performance of ICL: 1) injecting knowledge into LLMs during continual self-supervised pre-training, 2) judiciously selecting the examples for ICL with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge. We evaluate the proposed approaches on autoregressive models (e.g., GPT-style LLMs) over multiple text classification and question-answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines, improving performance by more than 13% and 7% on text classification and question-answering tasks, respectively.
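To give a concrete sense of components 2) and 3) of the framework described above, here is a minimal sketch in Python. It stands in for the paper's method with common ICL heuristics: cosine similarity between precomputed embeddings as a proxy for knowledge-relevance scoring, and content-free calibration of label probabilities as a proxy for prior-knowledge calibration. All names (select_examples, calibrate) and the specific similarity and calibration choices are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def select_examples(query_emb, cand_embs, k=4):
    """Pick the k candidate demonstrations most relevant to the query.

    Stand-in for KICT's knowledge-relevance selection: here relevance is
    approximated by cosine similarity between precomputed embeddings.
    """
    sims = cand_embs @ query_emb / (
        np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9
    )
    return np.argsort(-sims)[:k]  # indices of the top-k candidates

def calibrate(label_probs, prior_probs):
    """Divide out the model's prior bias toward each label, then renormalize.

    `prior_probs` would be obtained by querying the LLM with a content-free
    input (e.g., "N/A") under the same prompt template.
    """
    adjusted = label_probs / (prior_probs + 1e-9)
    return adjusted / adjusted.sum()

# Example: a model biased toward the first label.
probs = np.array([0.6, 0.4])    # raw label probabilities for a test input
prior = np.array([0.7, 0.3])    # probabilities on a content-free input
print(calibrate(probs, prior))  # -> roughly [0.39, 0.61]
```

In this kind of pipeline, the selection step ranks candidate demonstrations per test input, and the calibration step removes the label bias measured on a content-free prompt before taking the argmax over labels.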
Anthology ID:
2024.findings-naacl.207
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3261–3280
URL:
https://aclanthology.org/2024.findings-naacl.207
Cite (ACL):
Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, and Ming Gao. 2024. Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3261–3280, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning (Wang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.207.pdf
Copyright:
2024.findings-naacl.207.copyright.pdf