KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation

Yiran Xing, Zai Shi, Zhao Meng, Gerhard Lakemeyer, Yunpu Ma, Roger Wattenhofer


Abstract
We present Knowledge Enhanced Multimodal BART (KM-BART), a Transformer-based sequence-to-sequence model capable of reasoning about commonsense knowledge from multimodal inputs of images and texts. We adapt the generative BART architecture (Lewis et al., 2020) into a multimodal model with visual and textual inputs. We further develop novel pretraining tasks to improve model performance on the Visual Commonsense Generation (VCG) task. In particular, our pretraining task of Knowledge-based Commonsense Generation (KCG) boosts model performance on the VCG task by leveraging commonsense knowledge from a large language model pretrained on external commonsense knowledge graphs. To the best of our knowledge, we are the first to propose a dedicated pretraining task for improving model performance on the VCG task. Experimental results show that by applying these novel pretraining tasks, our model reaches state-of-the-art performance on the VCG task (Park et al., 2020).
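The abstract describes adapting BART to consume both visual and textual inputs. As a rough illustration of one common way to do this (projecting pre-extracted image-region features into the token-embedding space and prepending them to the text sequence), here is a minimal PyTorch sketch. It is not the authors' implementation (see FomalhautB/KM-BART-ACL below for the official code); the class name, the 2048-dimensional region features (e.g., from Faster R-CNN), and the use of facebook/bart-base are illustrative assumptions.

```python
# A minimal sketch of a multimodal BART encoder input, NOT the KM-BART code.
# Assumes pre-extracted visual region features of dimension `visual_dim`.
import torch
import torch.nn as nn
from transformers import BartModel, BartTokenizer

class MultimodalBartSketch(nn.Module):
    """Prepends projected image-region features to BART's token embeddings."""

    def __init__(self, visual_dim: int = 2048, bart_name: str = "facebook/bart-base"):
        super().__init__()
        self.bart = BartModel.from_pretrained(bart_name)
        # Project visual features into BART's d_model-dimensional embedding space.
        self.visual_proj = nn.Linear(visual_dim, self.bart.config.d_model)

    def forward(self, visual_feats, input_ids, attention_mask):
        # visual_feats: (batch, num_regions, visual_dim)
        text_embeds = self.bart.encoder.embed_tokens(input_ids)   # (B, T, H)
        vis_embeds = self.visual_proj(visual_feats)               # (B, R, H)
        inputs_embeds = torch.cat([vis_embeds, text_embeds], dim=1)
        # Visual regions are always attended to, so their mask is all ones.
        vis_mask = torch.ones(visual_feats.shape[:2],
                              dtype=attention_mask.dtype,
                              device=attention_mask.device)
        full_mask = torch.cat([vis_mask, attention_mask], dim=1)
        # Feed the concatenated sequence to the encoder; the decoder simply
        # re-consumes the text tokens here for demonstration purposes.
        return self.bart(inputs_embeds=inputs_embeds,
                         attention_mask=full_mask,
                         decoder_input_ids=input_ids)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = MultimodalBartSketch()
batch = tokenizer(["a person rides a horse on the beach"], return_tensors="pt")
dummy_regions = torch.randn(1, 36, 2048)  # 36 placeholder region features
outputs = model(dummy_regions, batch["input_ids"], batch["attention_mask"])
print(outputs.last_hidden_state.shape)  # (1, text_len, d_model)
```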
Anthology ID:
2021.acl-long.44
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
525–535
URL:
https://aclanthology.org/2021.acl-long.44
DOI:
10.18653/v1/2021.acl-long.44
Cite (ACL):
Yiran Xing, Zai Shi, Zhao Meng, Gerhard Lakemeyer, Yunpu Ma, and Roger Wattenhofer. 2021. KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 525–535, Online. Association for Computational Linguistics.
Cite (Informal):
KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation (Xing et al., ACL-IJCNLP 2021)
PDF:
https://aclanthology.org/2021.acl-long.44.pdf
Optional supplementary material:
2021.acl-long.44.OptionalSupplementaryMaterial.pdf
Video:
https://aclanthology.org/2021.acl-long.44.mp4
Code:
FomalhautB/KM-BART-ACL
Data:
ConceptNet | Conceptual Captions | MS COCO | Visual Genome | Visual Question Answering