PseudoReasoner: Leveraging Pseudo Labels for Commonsense Knowledge Base Population

Tianqing Fang, Quyet V. Do, Hongming Zhang, Yangqiu Song, Ginny Y. Wong, Simon See


Abstract
Commonsense Knowledge Base (CSKB) Population aims at reasoning over unseen entities and assertions on CSKBs, and is an important yet hard commonsense reasoning task. One challenge is that it requires out-of-domain generalization ability, as the source CSKB used for training is of a relatively small scale (1M) while the whole candidate space for population is much larger (200M). We propose PseudoReasoner, a semi-supervised learning framework for CSKB population that uses a teacher model pre-trained on CSKBs to provide pseudo labels on the unlabeled candidate dataset for a student model to learn from. The teacher can be a generative model rather than being restricted to discriminative models, as in previous works. In addition, we design a new filtering procedure for pseudo labels, based on the influence function and the student model's predictions, to further improve performance. The framework improves the backbone model KG-BERT (RoBERTa-large) by 3.3 points on overall performance and, in particular, by 5.3 points on out-of-domain performance, achieving state-of-the-art results. Codes and data are available at https://github.com/HKUST-KnowComp/PseudoReasoner.
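The sketch below illustrates the teacher-student pseudo-labeling flow described in the abstract; it is not the paper's implementation. The real teacher and student are KG-BERT-style transformers (or a generative teacher) scoring commonsense triples, and the paper's filtering step relies on influence functions together with the student's predictions. Here, tiny logistic-regression models on synthetic features and a hypothetical confidence-plus-agreement filter stand in for those components, purely to show the data flow.

```python
# Minimal sketch of the PseudoReasoner-style pipeline (assumptions noted above).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for data:
#   X_labeled / y_labeled : the (small) labeled CSKB training triples.
#   X_unlabeled           : the (much larger) candidate pool to populate.
X_labeled = rng.normal(size=(200, 16))
y_labeled = (X_labeled[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
X_unlabeled = rng.normal(size=(2000, 16))

# Step 1: train the teacher on the labeled CSKB data.
teacher = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# Step 2: the teacher assigns pseudo labels to the unlabeled candidates.
teacher_probs = teacher.predict_proba(X_unlabeled)[:, 1]
pseudo_labels = (teacher_probs >= 0.5).astype(int)

# Step 3 (simplified filtering): the paper filters pseudo labels with
# influence functions and the student's predictions; as a proxy, keep only
# high-confidence teacher labels that a provisional student agrees with.
confident = (teacher_probs >= 0.9) | (teacher_probs <= 0.1)
provisional_student = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
student_probs = provisional_student.predict_proba(X_unlabeled)[:, 1]
agrees = (student_probs >= 0.5).astype(int) == pseudo_labels
keep = confident & agrees
X_pseudo, y_pseudo = X_unlabeled[keep], pseudo_labels[keep]

# Step 4: train the student on labeled + filtered pseudo-labeled data.
X_train = np.concatenate([X_labeled, X_pseudo])
y_train = np.concatenate([y_labeled, y_pseudo])
student = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(f"kept {keep.sum()} / {len(X_unlabeled)} pseudo-labeled candidates")
```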
Anthology ID:
2022.findings-emnlp.246
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3379–3394
URL:
https://aclanthology.org/2022.findings-emnlp.246
DOI:
10.18653/v1/2022.findings-emnlp.246
Cite (ACL):
Tianqing Fang, Quyet V. Do, Hongming Zhang, Yangqiu Song, Ginny Y. Wong, and Simon See. 2022. PseudoReasoner: Leveraging Pseudo Labels for Commonsense Knowledge Base Population. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3379–3394, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
PseudoReasoner: Leveraging Pseudo Labels for Commonsense Knowledge Base Population (Fang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.246.pdf