ArT: All-round Thinker for Unsupervised Commonsense Question Answering

Jiawei Wang, Hai Zhao


Abstract
Without labeled question-answer pairs for training, unsupervised commonsense question answering (QA) is extremely challenging because it ordinarily depends on an external commonsense source such as a knowledge base (KB), which is costly to construct. Recently, pre-trained language models (PLMs) have proven effective as an alternative source of commonsense clues when used as knowledge generators. However, existing work either relies on large-scale in-domain or out-of-domain labeled data, or fails to generate high-quality knowledge in a general way. Motivated by how humans think, we propose All-round Thinker (ArT), an approach that fully exploits association during knowledge generation. In detail, our model first attends to key parts of the given context and then generates highly related knowledge from them by free association, much as humans do. In addition, for causal reasoning, a reverse-thinking mechanism is added to further strengthen bidirectional inference between cause and effect. ArT is fully unsupervised and KB-free. We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA, and SCT. Across PLM backbones of all scales, ArT performs strongly and outperforms previous advanced unsupervised models.
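To make the described pipeline concrete, below is a minimal, hypothetical sketch of the unsupervised PLM-as-knowledge-generator setup the abstract outlines: a PLM free-associates a commonsense clue from the question context, and each answer candidate is then scored by its language-model likelihood conditioned on the context plus that clue. The prompt wording, the GPT-2 backbone, and the negative log-likelihood scoring here are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of an unsupervised QA pipeline with a PLM as
# knowledge generator; prompts and scoring are assumptions, not ArT's
# exact formulation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def generate_knowledge(context: str, max_new_tokens: int = 20) -> str:
    """Ask the PLM to free-associate a commonsense clue about the context."""
    prompt = f"{context} A relevant fact:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=max_new_tokens,
                             do_sample=True, top_p=0.9,
                             pad_token_id=tokenizer.eos_token_id)
    # Return only the newly generated continuation.
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

def answer_nll(context: str, knowledge: str, answer: str) -> float:
    """Average negative log-likelihood of the answer tokens given the
    context plus generated knowledge; lower means more plausible."""
    prefix = f"{knowledge} {context}"
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    full_ids = tokenizer(f"{prefix} {answer}", return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # score only the answer span
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss
    return loss.item()

# Toy COPA-style example: pick the choice with the lowest NLL.
premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
knowledge = generate_knowledge(premise)
best = min(choices, key=lambda c: answer_nll(premise, knowledge, c))
print(best)
```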
Anthology ID:
2022.coling-1.128
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1490–1501
URL:
https://aclanthology.org/2022.coling-1.128
Cite (ACL):
Jiawei Wang and Hai Zhao. 2022. ArT: All-round Thinker for Unsupervised Commonsense Question Answering. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1490–1501, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
ArT: All-round Thinker for Unsupervised Commonsense Question Answering (Wang & Zhao, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.128.pdf
Code:
wangjw424/commonsenseqa-art
Data:
COPA, ConceptNet