Mitigating Temporal Misalignment by Discarding Outdated Facts

Michael Zhang, Eunsol Choi


Abstract
While large language models can retain vast amounts of world knowledge seen during pretraining, such knowledge is prone to going out of date and is nontrivial to update. Furthermore, these models are often used under temporal misalignment: tasked with answering questions about the present despite having been trained only on data collected in the past. To mitigate the effects of temporal misalignment, we propose fact duration prediction: the task of predicting how long a given fact will remain true. In our experiments, we demonstrate that identifying which facts are prone to rapid change can help models avoid reciting outdated information and determine which predictions require seeking out up-to-date knowledge sources. We also show that, by discarding volatile facts, modeling fact duration improves calibration under temporal misalignment for knowledge-intensive tasks such as open-retrieval question answering.
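To make the abstract's decision rule concrete, here is a minimal sketch of how a fact-duration gate could work: answers whose supporting fact is predicted to outlast the train/query time gap are kept, while volatile facts are routed to an up-to-date source. This is not the authors' implementation; the `Prediction` container, the `filter_outdated` helper, and the hard duration threshold are all illustrative assumptions, and in the paper the duration estimate would come from a learned fact duration model rather than being supplied directly.

```python
from dataclasses import dataclass

# Hypothetical sketch (not the paper's code): discard predictions whose
# supporting fact is likely to have changed within the gap between the
# training data's collection time and the query time.

@dataclass
class Prediction:
    question: str
    answer: str
    predicted_duration_years: float  # how long the fact is expected to stay true


def filter_outdated(preds: list[Prediction], misalignment_years: float) -> list[tuple[str, str]]:
    """Keep answers whose facts likely outlast the train/query gap;
    abstain on volatile facts and defer to a current knowledge source."""
    results = []
    for p in preds:
        if p.predicted_duration_years > misalignment_years:
            results.append((p.question, p.answer))  # fact likely still true
        else:
            results.append((p.question, "[abstain: consult an up-to-date source]"))
    return results


if __name__ == "__main__":
    preds = [
        Prediction("Who is the UK prime minister?", "Boris Johnson", 2.0),
        Prediction("What is the capital of France?", "Paris", 100.0),
    ]
    # Model trained on 2021 data, queried in 2023 -> two-year misalignment.
    for question, answer in filter_outdated(preds, misalignment_years=2.0):
        print(f"{question} -> {answer}")
```

In this toy run, the volatile political fact is discarded while the stable geographic fact is answered, mirroring how the paper uses duration estimates to improve calibration by abstaining on facts likely to have changed.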
Anthology ID: 2023.emnlp-main.879
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 14213–14226
URL: https://aclanthology.org/2023.emnlp-main.879
DOI: 10.18653/v1/2023.emnlp-main.879
Cite (ACL): Michael Zhang and Eunsol Choi. 2023. Mitigating Temporal Misalignment by Discarding Outdated Facts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14213–14226, Singapore. Association for Computational Linguistics.
Cite (Informal): Mitigating Temporal Misalignment by Discarding Outdated Facts (Zhang & Choi, EMNLP 2023)
PDF: https://aclanthology.org/2023.emnlp-main.879.pdf
Video: https://aclanthology.org/2023.emnlp-main.879.mp4