Temporal Common Sense Acquisition with Minimal Supervision

Ben Zhou, Qiang Ning, Daniel Khashabi, Dan Roth


Abstract
Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TacoLM, a temporal common sense language model. Our method is shown to give quality predictions of various dimensions of temporal common sense (on UDST and a newly collected dataset from RealNews). It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA (on TimeBank, HiEVE and MCTACO) that are better than using the standard BERT. Thus, it will be an important component of temporal NLP.
Anthology ID: 2020.acl-main.678
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7579–7589
URL: https://aclanthology.org/2020.acl-main.678
DOI: 10.18653/v1/2020.acl-main.678
Cite (ACL): Ben Zhou, Qiang Ning, Daniel Khashabi, and Dan Roth. 2020. Temporal Common Sense Acquisition with Minimal Supervision. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7579–7589, Online. Association for Computational Linguistics.
Cite (Informal): Temporal Common Sense Acquisition with Minimal Supervision (Zhou et al., ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.678.pdf
Video: http://slideslive.com/38929393
Data: RealNews