Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives

Xinliang Frederick Zhang, Nicholas Beauchamp, Lu Wang


Abstract
Reasoning about time and temporal relations is an integral aspect of human cognition, essential for perceiving the world and navigating our experiences. Though large language models (LLMs) have demonstrated impressive performance in many reasoning tasks, temporal reasoning remains challenging due to its intrinsic complexity. In this work, we first study an essential temporal reasoning task, temporal graph generation, to unveil LLMs' inherent, global reasoning capabilities. We show that this task presents great challenges even for the most powerful LLMs, such as GPT-3.5/4. We also observe a significant performance gap: small models (<10B) lag behind LLMs by 50%. Next, we study how to close this gap under a budget constraint, e.g., without model finetuning. We propose a new prompting technique tailored for temporal reasoning, Narrative-of-Thought (NoT), which first converts the event set into a Python class, then prompts a small model to generate a temporally grounded narrative, guiding the final generation of a temporal graph. Extensive experiments showcase the efficacy of NoT in improving various metrics. Notably, NoT attains the highest F1 on the Schema-11 evaluation set, while securing an overall F1 on par with GPT-3.5. NoT also achieves the best structural similarity across the board, even compared with GPT-3.5/4.
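The abstract describes NoT's first step as converting the event set into a Python class before prompting. A minimal sketch of what such a conversion could look like is below; the function name, class layout, and example events are all illustrative assumptions, not the authors' exact format.

```python
def events_to_python_class(schema_name, events):
    """Render an unordered event set as Python class source code,
    suitable for inclusion in a prompt (illustrative format only)."""
    lines = [f"class {schema_name}:"]
    lines.append('    """Unordered events; temporal order is unknown."""')
    for i, event in enumerate(events):
        lines.append(f"    def event_{i}(self):")
        lines.append(f'        """{event}"""')
    return "\n".join(lines)


# Hypothetical event set for a "bake a cake" schema.
prompt_code = events_to_python_class(
    "BakeCake",
    ["mix ingredients", "preheat oven", "bake batter"],
)
print(prompt_code)
```

In this sketch, the rendered class text would then be embedded in the prompt that asks the small model to recount a temporally grounded narrative over the events, from which the temporal graph is finally generated.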
Anthology ID:
2024.findings-emnlp.963
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
16507–16530
URL:
https://aclanthology.org/2024.findings-emnlp.963
Cite (ACL):
Xinliang Frederick Zhang, Nicholas Beauchamp, and Lu Wang. 2024. Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16507–16530, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives (Zhang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.963.pdf