Locating and Extracting Relational Concepts in Large Language Models

Zijian Wang, Britney Whyte, Chang Xu


Abstract
Relational concepts are foundational to the structure of knowledge representation, as they facilitate associations between entity concepts, allowing us to express and comprehend complex world knowledge. By expressing relational concepts in natural language prompts, people can effortlessly interact with large language models (LLMs) and recall desired factual knowledge. However, the process of knowledge recall lacks interpretability, and the representations of relational concepts within LLMs remain unknown to us. In this paper, we identify hidden states that can express entity and relational concepts through causal mediation analysis of fact recall processes. Our findings reveal that, at the last token position of the input prompt, there are hidden states that solely express the causal effects of relational concepts. Based on this finding, we hypothesize that these hidden states can be treated as relational representations, and we successfully extract them from LLMs. The experimental results demonstrate the high credibility of these relational representations: they can be flexibly transplanted into other fact recall processes, and they can also serve as robust entity connectors. Moreover, we show that relational representations exhibit significant potential for controllable fact recall through relation rewriting.
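The "transplanting" the abstract describes follows the general pattern of activation patching: cache a hidden state from one forward pass and substitute it into another, so that downstream computation inherits the cached state's causal effect. The sketch below illustrates only this mechanism on a toy stack of linear layers; the model, the `patch_layer` choice, and all names are illustrative assumptions, not the paper's actual setup, which operates on a real LLM's last-token hidden states.

```python
import numpy as np

# Toy stand-in for a transformer: a stack of tanh-linear layers over a single
# state vector (playing the role of the last-token hidden state in a real LLM).
rng = np.random.default_rng(0)
N_LAYERS, D = 4, 8
weights = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_LAYERS)]

def forward(x, patch=None):
    """Run the toy model; optionally overwrite the hidden state entering one
    layer with a cached state (activation patching). `patch` is (layer, state)."""
    h = x
    states = [h]
    for i, W in enumerate(weights):
        if patch is not None and patch[0] == i:
            h = patch[1]          # transplant the cached hidden state
        h = np.tanh(h @ W)
        states.append(h)
    return h, states

# "Prompt A": run normally and cache the state entering layer 2.
x_a = rng.standard_normal(D)
out_a, states_a = forward(x_a)
cached = states_a[2]

# "Prompt B": rerun with the cached state patched in at layer 2.
x_b = rng.standard_normal(D)
out_patched, _ = forward(x_b, patch=(2, cached))

# From layer 2 onward the computation matches prompt A exactly.
print(np.allclose(out_patched, out_a))  # True
```

In a causal mediation analysis, comparing `out_patched` against the unpatched run of prompt B is what quantifies how much of the output the patched state causally mediates.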
Anthology ID:
2024.findings-acl.287
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4818–4832
URL:
https://aclanthology.org/2024.findings-acl.287
Cite (ACL):
Zijian Wang, Britney Whyte, and Chang Xu. 2024. Locating and Extracting Relational Concepts in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 4818–4832, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Locating and Extracting Relational Concepts in Large Language Models (Wang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.287.pdf