Time Machine GPT

Felix Drinkall, Eghbal Rahimikia, Janet Pierrehumbert, Stefan Zohren

Abstract
Large language models (LLMs) are often trained on extensive, temporally indiscriminate text corpora, reflecting the lack of datasets with temporal metadata. This approach is not aligned with the evolving nature of language. Conventional methods for creating temporally adapted language models often depend on further pre-training static models on time-specific data. This paper presents a new approach: a series of point-in-time LLMs called TimeMachineGPT (TiMaGPT), specifically designed to be nonprognosticative. This ensures they remain uninformed about future factual information and linguistic changes. This strategy is beneficial for understanding language evolution and is of critical importance when applying models in dynamic contexts, such as time-series forecasting, where foresight of future information can prove problematic. We provide access to both the models and training datasets.
Anthology ID: 2024.findings-naacl.208
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3281–3292
URL: https://aclanthology.org/2024.findings-naacl.208
Cite (ACL): Felix Drinkall, Eghbal Rahimikia, Janet Pierrehumbert, and Stefan Zohren. 2024. Time Machine GPT. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3281–3292, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Time Machine GPT (Drinkall et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.208.pdf
Copyright: 2024.findings-naacl.208.copyright.pdf