MissModal: Increasing Robustness to Missing Modality in Multimodal Sentiment Analysis

Ronghao Lin, Haifeng Hu


Abstract
When applying multimodal machine learning to downstream inference, both joint and coordinated multimodal representations rely on the complete presence of modalities, as in training. However, modal-incomplete data, where certain modalities are missing, greatly reduces performance in Multimodal Sentiment Analysis (MSA) due to varying input forms and deficiencies in semantic information. This limits the applicability of the predominant MSA methods in the real world, where the completeness of multimodal data is uncertain and variable. Generation-based methods attempt to synthesize the missing modality, yet they require complex hierarchical architectures with high computational costs and struggle with the representation gaps across different modalities. In contrast, we propose a novel representation learning approach named MissModal, devoted to increasing robustness to missing modalities in a classification setting. Specifically, we adopt constraints with a geometric contrastive loss, a distribution distance loss, and a sentiment semantic loss to align the representations of modal-missing and modal-complete data, without impacting sentiment inference on complete modalities. Furthermore, we demand no changes in the multimodal fusion stage, highlighting the generality of our method for other multimodal learning systems. Extensive experiments on two public MSA datasets demonstrate that the proposed method achieves superior performance with minimal computational costs across various missing-modality scenarios (flexibility), including severely missing modalities (efficiency).
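The abstract names three alignment constraints (geometric contrastive, distribution distance, and sentiment semantic) but does not give their formulas. Below is a minimal numpy sketch of the general idea only: all function names are illustrative, and the concrete choices here (cosine distance for the geometric term, first/second-moment matching as a stand-in for a distribution distance, mean squared error between predictions for the semantic term) are assumptions, not the paper's actual losses.

```python
import numpy as np

def geometric_alignment_loss(z_miss, z_full):
    # Geometric (contrastive-style) term: pull the modal-missing embedding
    # toward its modal-complete counterpart via 1 - cosine similarity.
    num = float(np.dot(z_miss, z_full))
    den = float(np.linalg.norm(z_miss) * np.linalg.norm(z_full)) + 1e-8
    return 1.0 - num / den

def distribution_distance_loss(Z_miss, Z_full):
    # Distribution-level term: match the first two moments of the two
    # batches of embeddings (a simple stand-in for a distance such as MMD).
    mean_gap = np.mean((Z_miss.mean(axis=0) - Z_full.mean(axis=0)) ** 2)
    var_gap = np.mean((Z_miss.var(axis=0) - Z_full.var(axis=0)) ** 2)
    return float(mean_gap + var_gap)

def sentiment_semantic_loss(pred_miss, pred_full):
    # Semantic term: the sentiment prediction from modal-missing input
    # should agree with the prediction from modal-complete input.
    return float(np.mean((pred_miss - pred_full) ** 2))

def total_alignment_loss(z_miss, z_full, Z_miss, Z_full,
                         pred_miss, pred_full,
                         w_geo=1.0, w_dist=1.0, w_sem=1.0):
    # Weighted sum of the three alignment constraints (weights hypothetical).
    return (w_geo * geometric_alignment_loss(z_miss, z_full)
            + w_dist * distribution_distance_loss(Z_miss, Z_full)
            + w_sem * sentiment_semantic_loss(pred_miss, pred_full))
```

When the modal-missing representation already matches the modal-complete one, all three terms vanish, so gradient-based training drives the two representations together without touching the fusion module itself.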
Anthology ID:
2023.tacl-1.94
Volume:
Transactions of the Association for Computational Linguistics, Volume 11
Year:
2023
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
1686–1702
URL:
https://aclanthology.org/2023.tacl-1.94
DOI:
10.1162/tacl_a_00628
Cite (ACL):
Ronghao Lin and Haifeng Hu. 2023. MissModal: Increasing Robustness to Missing Modality in Multimodal Sentiment Analysis. Transactions of the Association for Computational Linguistics, 11:1686–1702.
Cite (Informal):
MissModal: Increasing Robustness to Missing Modality in Multimodal Sentiment Analysis (Lin & Hu, TACL 2023)
PDF:
https://aclanthology.org/2023.tacl-1.94.pdf