Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification

Anni Zou, Zhuosheng Zhang, Hai Zhao


Abstract
Commonsense fact verification, as a challenging branch of commonsense question-answering (QA), aims to verify through facts whether a given commonsense claim is correct or not. Answering commonsense questions necessitates a combination of knowledge from various levels. However, existing studies primarily rest on grasping either unstructured evidence or potential reasoning paths from structured knowledge bases, and fail to exploit the benefits of heterogeneous knowledge simultaneously. In light of this, we propose Decker, a commonsense fact verification model that is capable of bridging heterogeneous knowledge by uncovering latent relationships between structured and unstructured knowledge. Experimental results on two commonsense fact verification benchmark datasets, CSQA2.0 and CREAK, demonstrate the effectiveness of our Decker, and further analysis verifies its capability to seize more valuable information through reasoning. The official implementation of Decker is available at https://github.com/Anni-Zou/Decker.
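For readers unfamiliar with the task setup, the sketch below illustrates commonsense fact verification in its simplest form: a claim is paired with unstructured evidence and linearized knowledge-graph triples, and a sequence-pair classifier predicts True or False. This is a minimal, hypothetical illustration only, not the Decker architecture; the checkpoint, example claim, and evidence strings are assumptions chosen for demonstration.

```python
# Hypothetical sketch of the commonsense fact verification task setup.
# NOT the Decker model: the checkpoint and inputs are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "roberta-base"  # placeholder encoder backbone
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

claim = "A pound of cotton weighs the same as a pound of iron."
evidence = "The pound is a unit of mass; equal masses weigh the same."      # unstructured evidence
triples = "cotton -- made of -- fiber ; iron -- is a -- metal"              # structured knowledge, linearized

# Encode the claim together with heterogeneous knowledge as a text pair.
inputs = tokenizer(claim, evidence + " " + triples, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head here is untrained, so the output is illustrative only.
label = "True" if logits.argmax(-1).item() == 1 else "False"
print(label)
```

Decker itself goes beyond this flat concatenation by modeling the latent relationships between the structured and unstructured knowledge sources, as described in the abstract above.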
Anthology ID:
2023.findings-acl.752
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11891–11904
URL:
https://aclanthology.org/2023.findings-acl.752
DOI:
10.18653/v1/2023.findings-acl.752
Cite (ACL):
Anni Zou, Zhuosheng Zhang, and Hai Zhao. 2023. Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11891–11904, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification (Zou et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.752.pdf