Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset

Tianqing Fang, Weiqi Wang, Sehyun Choi, Shibo Hao, Hongming Zhang, Yangqiu Song, Bin He


Abstract
Reasoning over commonsense knowledge bases (CSKB) whose elements are in the form of free text is an important yet hard task in NLP. While CSKB completion only fills in missing links within the domain of the CSKB, CSKB population has been proposed as an alternative, with the goal of reasoning over unseen assertions from external resources. In this task, CSKBs are grounded to a large-scale eventuality (activity, state, and event) graph to discriminate whether novel triples from the eventuality graph are plausible or not. However, existing evaluations of the population task are either inaccurate (automatic evaluation with randomly sampled negative examples) or small in scale (human annotation). In this paper, we benchmark the CSKB population task with a new large-scale dataset by first aligning four popular CSKBs and then presenting a high-quality, human-annotated evaluation set to probe neural models' commonsense reasoning ability. We also propose a novel inductive commonsense reasoning model that reasons over graphs. Experimental results show that generalizing commonsense reasoning to unseen assertions is inherently a hard task: models achieving high accuracy during training perform poorly on the evaluation set, leaving a large gap from human performance. Code and data are publicly available at https://github.com/HKUST-KnowComp/CSKB-Population.
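The abstract frames population as discriminating whether a novel (head, relation, tail) triple is plausible. As a rough illustration of that setup only, and not the graph-based model proposed in the paper, the sketch below scores a verbalized triple with a generic pretrained-LM classifier (KG-BERT-style). The model name, relation verbalization, and helper function are hypothetical choices, and the classification head is untrained here, so scores are meaningless until the model is fine-tuned on labeled CSKB triples.

```python
# Hedged sketch: plausibility scoring for a candidate commonsense triple
# with a pretrained-LM sequence classifier (illustrative, not the paper's model).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: any encoder-style pretrained LM could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def score_triple(head: str, relation: str, tail: str) -> float:
    """Return the probability that the verbalized triple is plausible."""
    # Verbalize the triple as one sequence; the separator scheme is a design choice.
    text = f"{head} {relation} {tail}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 1 is taken to be the "plausible" class after fine-tuning.
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example: a novel triple harvested from an eventuality graph.
print(score_triple("PersonX studies all night", "xEffect", "PersonX feels tired"))
```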
Anthology ID:
2021.emnlp-main.705
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8949–8964
URL:
https://aclanthology.org/2021.emnlp-main.705
DOI:
10.18653/v1/2021.emnlp-main.705
Cite (ACL):
Tianqing Fang, Weiqi Wang, Sehyun Choi, Shibo Hao, Hongming Zhang, Yangqiu Song, and Bin He. 2021. Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8949–8964, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset (Fang et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.705.pdf
Video:
 https://aclanthology.org/2021.emnlp-main.705.mp4
Code:
hkust-knowcomp/cskb-population (+ additional community code)
Data:
ConceptNet, GLUCOSE, Social Chemistry 101