Revisiting the Knowledge Injection Frameworks

Peng Fu, Yiming Zhang, Haobo Wang, Weikang Qiu, Junbo Zhao


Abstract
In recent years, large language models (LLMs), such as GPTs, have attained great impact worldwide. However, how to adapt these LLMs to better suit vertical, domain-specific tasks by utilizing external knowledge remains incompletely solved. A few works have emerged along this line, most of which rely on an alignment heuristic that injects the corresponding knowledge tuple into the associated text sample. Despite the promise, we identify a pivotal problem that pervades these works: injecting an unaligned (i.e., random) knowledge tuple into the LLMs achieves comparable, and sometimes better, results than injecting the aligned knowledge. We therefore conduct a thorough investigation of this frustrating finding across a variety of related prior works and further provide a chain of potential interpretations for the phenomenon. Based on all that, we offer a simple remediation technique whose core lies in an emphasis on pruning and purifying the external knowledge base before it is injected into the LLMs. Finally, we show that integrating this technique into most (if not all) knowledge injection frameworks and recent LLMs overcomes the aforementioned sanity problem and further pushes the boundary of the performance of domain-adaptive LLMs.
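To make the sanity check described above concrete, here is a minimal Python sketch of the aligned-versus-random injection ablation. Everything in it — the toy knowledge tuples, the `[KNOWLEDGE] … [SEP]` verbalization format, and the function names — is a hypothetical illustration under assumed conventions, not the authors' actual code or data.

```python
import random

# Toy corpus: each text sample paired with its aligned (head, relation, tail)
# knowledge tuple. All entries here are hypothetical illustrations.
SAMPLES = [
    ("Aspirin is often prescribed for headaches.", ("aspirin", "treats", "headache")),
    ("Metformin lowers blood sugar in diabetics.", ("metformin", "treats", "type 2 diabetes")),
    ("Lisinopril is used to control hypertension.", ("lisinopril", "treats", "hypertension")),
]


def inject(text, knowledge):
    """Prepend a verbalized knowledge tuple to the text sample
    (an assumed injection format, for illustration only)."""
    head, rel, tail = knowledge
    return f"[KNOWLEDGE] {head} {rel} {tail} [SEP] {text}"


def build_inputs(aligned, seed=0):
    """Aligned condition: each sample gets its own tuple.
    Random (sanity-check) condition: tuples are shuffled across samples."""
    rng = random.Random(seed)
    tuples = [t for _, t in SAMPLES]
    if not aligned:
        rng.shuffle(tuples)
    return [inject(text, t) for (text, _), t in zip(SAMPLES, tuples)]


if __name__ == "__main__":
    for cond in (True, False):
        print("aligned" if cond else "random")
        for x in build_inputs(aligned=cond):
            print("  ", x)
```

The paper's finding, restated in these terms, is that fine-tuning on `build_inputs(aligned=False)` can match or beat `build_inputs(aligned=True)`, which motivates pruning and purifying the knowledge base before injection.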
Anthology ID:
2023.emnlp-main.677
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10983–10997
URL:
https://aclanthology.org/2023.emnlp-main.677
DOI:
10.18653/v1/2023.emnlp-main.677
Cite (ACL):
Peng Fu, Yiming Zhang, Haobo Wang, Weikang Qiu, and Junbo Zhao. 2023. Revisiting the Knowledge Injection Frameworks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10983–10997, Singapore. Association for Computational Linguistics.
Cite (Informal):
Revisiting the Knowledge Injection Frameworks (Fu et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.677.pdf
Video:
https://aclanthology.org/2023.emnlp-main.677.mp4