Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness

Zichao Li, Ines Arous, Siva Reddy, Jackie Cheung


Abstract
The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest. To maintain the knowledge acquired by LLMs, we need to ensure that the editing of learned facts respects internal logical constraints, known as the dependency of knowledge. Existing work on editing LLMs has partially addressed dependency in the case where the editing of a fact should apply to its lexical variations without disrupting irrelevant ones. However, it neglects the dependency between a fact and its logical implications. We propose an evaluation protocol with an accompanying question-answering dataset, StandUp, that provides a comprehensive assessment of the editing process with respect to the above notions of dependency. Our protocol involves setting up a controlled environment in which we edit facts and monitor their impact on LLMs, along with their implications based on If-Then rules. Extensive experiments on StandUp show that existing knowledge editing methods are sensitive to the surface form of knowledge and have limited performance in inferring the implications of edited facts.
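
For intuition, below is a minimal Python sketch of the kind of setup the abstract describes: a fact edit is applied, its implication is derived from a hand-written If-Then rule, and question probes are generated to check both specificity and implication awareness. The rule, relation names, and question templates are illustrative assumptions, not the paper's actual data or released code.

```python
# Illustrative sketch only (not the authors' code): apply a fact edit,
# derive its logical implication with a toy If-Then rule, and build
# question probes that test specificity (the edit and a paraphrase)
# and implication awareness (the entailed fact).

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str


def if_then_rule(fact: Fact) -> Optional[Fact]:
    """Toy rule: IF (x, capital_of, y) THEN (x, located_in, y)."""
    if fact.relation == "capital_of":
        return Fact(fact.subject, "located_in", fact.obj)
    return None


def build_probes(edited: Fact) -> Dict[str, str]:
    """Questions whose answers should change consistently after the edit."""
    probes = {
        # Specificity: the edited fact and a lexical variation of it.
        "edited": f"Which country is {edited.subject} the capital of?",
        "paraphrase": f"{edited.subject} serves as the capital of which country?",
    }
    implied = if_then_rule(edited)
    if implied is not None:
        # Implication awareness: the logically entailed fact should update too.
        probes["implication"] = f"Is {implied.subject} located in {implied.obj}?"
    return probes


# Counterfactual edit used only for illustration.
edit = Fact("Canberra", "capital_of", "New Zealand")
for name, question in build_probes(edit).items():
    print(f"{name}: {question}")
```
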
Anthology ID:
2023.findings-emnlp.511
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7623–7636
URL:
https://aclanthology.org/2023.findings-emnlp.511
DOI:
10.18653/v1/2023.findings-emnlp.511
Cite (ACL):
Zichao Li, Ines Arous, Siva Reddy, and Jackie Cheung. 2023. Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7623–7636, Singapore. Association for Computational Linguistics.
Cite (Informal):
Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness (Li et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.511.pdf