CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering

Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Mahsa Baktashmotlagh, Gholamreza Haffari


Abstract
Commonsense reasoning refers to the ability to evaluate a social situation and act accordingly. Identifying the implicit causes and effects of a social context is the driving capability that can enable machines to perform commonsense reasoning. The dynamic world of social interactions requires context-dependent, on-demand systems to infer such underlying information. However, current approaches in this realm lack the ability to perform commonsense reasoning when facing an unseen situation, mostly because they cannot identify a diverse range of implicit social relations and hence fail to estimate the correct reasoning path. In this paper, we present the Conditional Seq2Seq-based Mixture model (CosMo), which provides the capability of dynamic and diverse content generation. We use CosMo to generate context-dependent clauses, which form a dynamic Knowledge Graph (KG) on-the-fly for commonsense reasoning. To show the adaptability of our model to context-dependent knowledge generation, we address the task of zero-shot commonsense question answering. The empirical results indicate an improvement of up to +5.2% over the state-of-the-art models.
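
The abstract describes CosMo as a conditional mixture of seq2seq generators whose components are weighted by the input context. As an illustration only, the minimal PyTorch sketch below shows a generic mixture-of-seq2seq-decoders with a context-dependent gate; the class name, dimensions, and gating scheme are assumptions made for exposition, not the paper's architecture (see the farhadmfar/cosmo repository for the actual implementation).

# Illustrative sketch of a mixture of seq2seq decoders with a context-conditioned
# gate. All names and hyperparameters here are hypothetical, not taken from CosMo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureSeq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden_dim=256, num_components=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # One decoder "expert" per mixture component.
        self.decoders = nn.ModuleList(
            nn.GRU(hidden_dim, hidden_dim, batch_first=True)
            for _ in range(num_components)
        )
        self.out = nn.Linear(hidden_dim, vocab_size)
        # Gating network: predicts mixture weights from the encoded context.
        self.gate = nn.Linear(hidden_dim, num_components)

    def forward(self, src_ids, tgt_ids):
        # Encode the context (e.g., the social situation / question).
        _, h = self.encoder(self.embed(src_ids))            # h: (1, B, H)
        mix_weights = F.softmax(self.gate(h[-1]), dim=-1)   # (B, K)

        tgt_emb = self.embed(tgt_ids)
        logits_per_component = []
        for decoder in self.decoders:
            dec_out, _ = decoder(tgt_emb, h)                 # (B, T, H)
            logits_per_component.append(self.out(dec_out))   # (B, T, V)
        logits = torch.stack(logits_per_component, dim=1)    # (B, K, T, V)

        # Mix the per-component token distributions with the gate weights.
        probs = torch.softmax(logits, dim=-1)
        mixed = (mix_weights[:, :, None, None] * probs).sum(dim=1)  # (B, T, V)
        return mixed

# Toy usage with random token ids.
model = MixtureSeq2Seq(vocab_size=100)
src = torch.randint(0, 100, (2, 7))
tgt = torch.randint(0, 100, (2, 5))
print(model(src, tgt).shape)  # torch.Size([2, 5, 100])
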
Anthology ID:
2020.coling-main.467
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5347–5359
URL:
https://aclanthology.org/2020.coling-main.467
DOI:
10.18653/v1/2020.coling-main.467
Cite (ACL):
Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Mahsa Baktashmotlagh, and Gholamreza Haffari. 2020. CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5347–5359, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering (Moghimifar et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.467.pdf
Code
farhadmfar/cosmo
Data
ATOMIC, ConceptNet