Commonsense Knowledge in Word Associations and ConceptNet

Chunhua Liu, Trevor Cohn, Lea Frermann


Abstract
Humans use countless basic, shared facts about the world to efficiently navigate their environment. This commonsense knowledge is rarely communicated explicitly; however, understanding how commonsense knowledge is represented in different paradigms is important for (a) a deeper understanding of human cognition and (b) augmenting automatic reasoning systems. This paper presents an in-depth comparison of two large-scale resources of general knowledge: ConceptNet, an engineered relational database, and SWOW, a knowledge graph derived from crowd-sourced word associations. We examine the structure, overlap, and differences between the two graphs, as well as the extent of situational commonsense knowledge present in the two resources. We finally show empirically that both resources improve downstream task performance on commonsense reasoning benchmarks over text-only baselines, suggesting that large-scale word association data, which have been obtained for several languages through crowd-sourcing, can be a valuable complement to curated knowledge graphs.
Anthology ID:
2021.conll-1.38
Volume:
Proceedings of the 25th Conference on Computational Natural Language Learning
Month:
November
Year:
2021
Address:
Online
Venues:
CoNLL | EMNLP
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
481–495
URL:
https://aclanthology.org/2021.conll-1.38
DOI:
10.18653/v1/2021.conll-1.38
Cite (ACL):
Chunhua Liu, Trevor Cohn, and Lea Frermann. 2021. Commonsense Knowledge in Word Associations and ConceptNet. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 481–495, Online. Association for Computational Linguistics.
Cite (Informal):
Commonsense Knowledge in Word Associations and ConceptNet (Liu et al., CoNLL 2021)
PDF:
https://aclanthology.org/2021.conll-1.38.pdf
Data
ATOMIC | CommonsenseQA | ConceptNet | MCScript | OpenBookQA