Large Language Models are Limited in Out-of-Context Knowledge Reasoning

Peng Hu, Changjiang Gao, Ruiqi Gao, Jiajun Chen, Shujian Huang


Abstract
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in in-context reasoning. However, previous work challenges their out-of-context reasoning ability, i.e., the ability to infer information from their training data rather than from the context or prompt. This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge. We designed a synthetic dataset with seven representative OCKR tasks to systematically assess the OCKR capabilities of LLMs. Using this dataset, we evaluated several LLMs and found that their proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings. Moreover, training the model to reason with reasoning examples does not yield significant improvement, while training the model to perform explicit knowledge retrieval helps with retrieving attribute knowledge but not relation knowledge, indicating that the model’s limited OCKR capabilities are due to difficulties in knowledge retrieval. Furthermore, we treat cross-lingual knowledge transfer as a distinct form of OCKR and evaluate this ability. Our results show that the evaluated model also exhibits limited ability in transferring knowledge across languages.
Anthology ID:
2024.findings-emnlp.178
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3144–3155
URL:
https://aclanthology.org/2024.findings-emnlp.178
Cite (ACL):
Peng Hu, Changjiang Gao, Ruiqi Gao, Jiajun Chen, and Shujian Huang. 2024. Large Language Models are Limited in Out-of-Context Knowledge Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3144–3155, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Large Language Models are Limited in Out-of-Context Knowledge Reasoning (Hu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.178.pdf
Data:
2024.findings-emnlp.178.data.zip