TRAM: Benchmarking Temporal Reasoning for Large Language Models

Yuqing Wang, Yun Zhao

Abstract
Reasoning about time is essential for understanding the nuances of events described in natural language. Previous research on this topic has been limited in scope and lacks standardized benchmarks that would allow for consistent evaluations across studies. In this paper, we introduce TRAM, a temporal reasoning (TeR) benchmark composed of ten datasets covering various temporal aspects of events, such as order, arithmetic, frequency, and duration, designed to facilitate a comprehensive evaluation of the TeR capabilities of large language models (LLMs). We evaluate popular LLMs such as GPT-4 and Llama2 in zero-shot and few-shot scenarios, and establish baselines with BERT-based and domain-specific models. Our findings indicate that the best-performing model still lags significantly behind human performance. It is our aspiration that TRAM will spur further progress in enhancing the TeR capabilities of LLMs.
Anthology ID:
2024.findings-acl.382
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6389–6415
URL:
https://aclanthology.org/2024.findings-acl.382
Cite (ACL):
Yuqing Wang and Yun Zhao. 2024. TRAM: Benchmarking Temporal Reasoning for Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 6389–6415, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
TRAM: Benchmarking Temporal Reasoning for Large Language Models (Wang & Zhao, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.382.pdf