UniS-MMC: Multimodal Classification via Unimodality-supervised Multimodal Contrastive Learning

Heqing Zou, Meng Shen, Chen Chen, Yuchen Hu, Deepu Rajan, Eng Siong Chng


Abstract
Multimodal learning aims to imitate human beings in acquiring complementary information from multiple modalities for various downstream tasks. However, traditional aggregation-based multimodal fusion methods ignore inter-modality relationships, treat each modality equally, and are sensitive to sensor noise, which degrades multimodal learning performance. In this work, we propose a novel multimodal contrastive method that learns more reliable multimodal representations under the weak supervision of unimodal prediction. Specifically, we first capture task-related unimodal representations and unimodal predictions from an introduced unimodal prediction task. Then, under the supervision of these unimodal predictions, the designed multimodal contrastive method aligns each unimodal representation with the more effective one. Experimental results with fused features on two image-text classification benchmarks, UPMC-Food-101 and N24News, show that our proposed Unimodality-Supervised MultiModal Contrastive learning method (UniS-MMC) outperforms current state-of-the-art multimodal methods. A detailed ablation study and further analysis demonstrate the advantages of the proposed method.
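To make the abstract's idea concrete, below is a minimal, hypothetical sketch of how unimodality-supervised contrastive alignment between two modalities could look. It is not the authors' implementation; the function name `unis_mmc_loss`, the temperature value, and the specific weighting by per-sample unimodal correctness are illustrative assumptions that follow the abstract's description of aligning each unimodal representation with the more effective one.

```python
# Hypothetical sketch only: aligns paired unimodal embeddings, weighting each
# direction by whether the counterpart modality's own prediction is correct.
import torch
import torch.nn.functional as F

def unis_mmc_loss(z_a, z_b, logits_a, logits_b, labels, temperature=0.1):
    """z_a, z_b:           (B, D) projected unimodal representations
    logits_a, logits_b: (B, C) unimodal classification logits
    labels:             (B,)   ground-truth class indices
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)

    # Per-sample correctness of each unimodal prediction (the weak supervision).
    correct_a = (logits_a.argmax(dim=-1) == labels).float()
    correct_b = (logits_b.argmax(dim=-1) == labels).float()

    # Cosine similarities between all cross-modal pairs in the batch.
    sim = z_a @ z_b.t() / temperature              # (B, B)
    targets = torch.arange(z_a.size(0), device=z_a.device)

    # InfoNCE-style alignment in both directions (a -> b and b -> a).
    loss_a2b = F.cross_entropy(sim, targets, reduction="none")
    loss_b2a = F.cross_entropy(sim.t(), targets, reduction="none")

    # Pull a modality toward its counterpart only when the counterpart's
    # unimodal prediction is correct, i.e. align with the more effective one.
    return (correct_b * loss_a2b + correct_a * loss_b2a).mean()
```

In a training loop, this loss would typically be added to the multimodal (fused) classification loss and the two unimodal classification losses; the exact weighting of these terms is described in the paper itself.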
Anthology ID:
2023.findings-acl.41
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
659–672
URL:
https://aclanthology.org/2023.findings-acl.41
DOI:
10.18653/v1/2023.findings-acl.41
Cite (ACL):
Heqing Zou, Meng Shen, Chen Chen, Yuchen Hu, Deepu Rajan, and Eng Siong Chng. 2023. UniS-MMC: Multimodal Classification via Unimodality-supervised Multimodal Contrastive Learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 659–672, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
UniS-MMC: Multimodal Classification via Unimodality-supervised Multimodal Contrastive Learning (Zou et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.41.pdf