MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning

Wanqing Cui, Keping Bi, Jiafeng Guo, Xueqi Cheng


Abstract
Since commonsense information is recorded in text far less frequently than it exists in the world, language models pre-trained on text generation have difficulty learning sufficient commonsense knowledge. Several studies have leveraged text retrieval to augment the commonsense ability of such models. Unlike text, images capture commonsense information inherently, but little effort has been devoted to utilizing them effectively. In this work, we propose a novel Multi-mOdal REtrieval (MORE) augmentation framework that leverages both text and images to enhance the commonsense ability of language models. Extensive experiments on the CommonGen task demonstrate the efficacy of MORE applied to pre-trained models of both single and multiple modalities.
Anthology ID:
2024.findings-acl.69
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1178–1192
URL:
https://aclanthology.org/2024.findings-acl.69
Cite (ACL):
Wanqing Cui, Keping Bi, Jiafeng Guo, and Xueqi Cheng. 2024. MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning. In Findings of the Association for Computational Linguistics ACL 2024, pages 1178–1192, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning (Cui et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.69.pdf