Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs

Essa Jan, Moiz Ali, Muhammad Saram Hassan, Muhammad Fareed Zaffar, Yasir Zaki


Abstract
As the knowledge of large language models (LLMs) becomes outdated over time, there is a growing need for efficient methods to update them, especially when injecting proprietary information. Our study reveals that comprehension-intensive fine-tuning tasks (e.g., question answering and fill-in-the-blank completion) achieve substantially higher knowledge retention rates (48%) than mapping-oriented tasks such as translation (17%) or text-to-JSON conversion (20%), despite exposure to identical factual content. We demonstrate that this pattern persists across model architectures and follows scaling laws, with larger models showing improved retention across all task types. However, all models exhibit significant performance drops when applying injected knowledge in broader contexts, suggesting limited semantic integration. These findings underscore the importance of task selection in updating LLM knowledge, demonstrating that effective knowledge injection depends not just on data exposure but on the depth of cognitive engagement during fine-tuning.
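The contrast the abstract draws between comprehension-intensive and mapping-oriented fine-tuning is easiest to see in the training data itself. Below is a minimal, hypothetical sketch (not taken from the paper) that renders one invented fact into the four task formats the abstract names, assuming a simple prompt/completion fine-tuning format; the fact, field names, and helper functions are all illustrative assumptions.

# Hypothetical sketch (not from the paper): the same fact rendered into the
# task formats the abstract contrasts -- comprehension-intensive (QA,
# fill-in-the-blank) vs. mapping-oriented (translation, text-to-JSON).
import json

# Invented example fact, used only for illustration.
FACT = "The Aurora-7 probe reached Europa's orbit in 2031."

def as_qa(fact: str) -> dict:
    # Comprehension-intensive: the target is the fact itself.
    return {
        "prompt": f"Context: {fact}\nQuestion: When did the Aurora-7 probe reach Europa's orbit?",
        "completion": "In 2031.",
    }

def as_cloze(fact: str) -> dict:
    # Comprehension-intensive: fill in the blanked-out detail.
    return {"prompt": fact.replace("2031", "____"), "completion": "2031"}

def as_translation(fact: str) -> dict:
    # Mapping-oriented: the model maps surface forms between languages;
    # producing the target does not require recalling the fact.
    return {
        "prompt": f"Translate to French: {fact}",
        "completion": "La sonde Aurora-7 a atteint l'orbite d'Europe en 2031.",
    }

def as_json(fact: str) -> dict:
    # Mapping-oriented: restructure the same content into JSON.
    return {
        "prompt": f"Convert to JSON: {fact}",
        "completion": json.dumps(
            {"probe": "Aurora-7", "event": "reached Europa's orbit", "year": 2031}
        ),
    }

if __name__ == "__main__":
    # Emit one fine-tuning record per task format (JSONL style).
    for make_example in (as_qa, as_cloze, as_translation, as_json):
        print(json.dumps(make_example(FACT)))

All four formats expose the model to identical factual content, but only the first two require it to produce the fact as the target, which is consistent with the abstract's point that retention depends on the depth of engagement rather than on exposure alone.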
Anthology ID:
2025.findings-emnlp.589
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11070–11077
URL:
https://aclanthology.org/2025.findings-emnlp.589/
Cite (ACL):
Essa Jan, Moiz Ali, Muhammad Saram Hassan, Muhammad Fareed Zaffar, and Yasir Zaki. 2025. Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 11070–11077, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs (Jan et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.589.pdf
Checklist:
https://aclanthology.org/2025.findings-emnlp.589.checklist.pdf