Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs
Essa Jan | Moiz Ali | Muhammad Saram Hassan | Muhammad Fareed Zaffar | Yasir Zaki
Findings of the Association for Computational Linguistics: EMNLP 2025
As the knowledge of large language models (LLMs) becomes outdated over time, there is a growing need for efficient methods to update them, especially when injecting proprietary information. Our study reveals that comprehension-intensive fine-tuning tasks (e.g., question answering and fill-in-the-blank) achieve substantially higher knowledge retention rates (48%) than mapping-oriented tasks like translation (17%) or text-to-JSON conversion (20%), despite exposure to identical factual content. We demonstrate that this pattern persists across model architectures and follows scaling laws, with larger models showing improved retention across all task types. However, all models exhibit significant performance drops when applying injected knowledge in broader contexts, suggesting limited semantic integration. These findings underscore the importance of task selection in updating LLM knowledge, indicating that effective knowledge injection depends not just on data exposure but on the depth of cognitive engagement during fine-tuning.
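To make the task-format distinction concrete, the sketch below shows one way the same factual content could be recast into the four fine-tuning formats the abstract contrasts. This is an illustrative example only, not the paper's actual data pipeline; the example fact, field names, and helper functions are all hypothetical.

```python
# Minimal sketch (hypothetical, not from the paper): casting one fact into the
# four fine-tuning task formats compared in the abstract.
import json

# Hypothetical fact used only for illustration.
fact = {"entity": "Acme Corp", "attribute": "founding year", "value": "2021"}

def as_question_answering(f):
    # Comprehension-intensive: answer a question about the fact.
    return {"prompt": f"What is the {f['attribute']} of {f['entity']}?",
            "completion": f["value"]}

def as_fill_in_the_blank(f):
    # Comprehension-intensive: complete a masked statement of the fact.
    return {"prompt": f"The {f['attribute']} of {f['entity']} is ____.",
            "completion": f["value"]}

def as_translation(f):
    # Mapping-oriented: translate a sentence that contains the fact.
    sentence = f"The {f['attribute']} of {f['entity']} is {f['value']}."
    return {"prompt": f"Translate to French: {sentence}",
            "completion": "<French translation of the sentence>"}

def as_text_to_json(f):
    # Mapping-oriented: convert the factual sentence into structured JSON.
    sentence = f"The {f['attribute']} of {f['entity']} is {f['value']}."
    return {"prompt": f"Convert to JSON: {sentence}",
            "completion": json.dumps(f)}

if __name__ == "__main__":
    for build in (as_question_answering, as_fill_in_the_blank,
                  as_translation, as_text_to_json):
        print(build(fact))
```

In all four formats the model is exposed to the same fact; what differs is whether the training objective requires comprehending the fact (QA, fill-in-the-blank) or merely mapping its surface form (translation, text-to-JSON).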