Debiasing Multimodal Models via Causal Information Minimization

Vaidehi Patil, Adyasha Maharana, Mohit Bansal


Abstract
Most existing debiasing methods for multimodal models, including causal intervention and inference methods, rely on approximate heuristics to represent the biases, such as shallow features from early stages of training or unimodal features for multimodal tasks like VQA; these heuristics may not capture the biases accurately. In this paper, we study bias arising from confounders in a causal graph for multimodal data, and examine a novel approach that leverages causally-motivated information minimization to learn the confounder representations. Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data. Hence, minimizing the information content of features obtained from a pretrained biased model helps learn the simplest predictive features that capture the underlying data distribution. We treat these features as confounder representations and use them via methods motivated by causal theory to remove bias from models. We find that the learned confounder representations indeed capture dataset biases, and that the proposed debiasing methods improve out-of-distribution (OOD) performance on multiple multimodal datasets without sacrificing in-distribution performance. Additionally, we introduce a novel metric to quantify the sufficiency of spurious features in models' predictions, which further demonstrates the effectiveness of our proposed methods.
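The core intuition above — that penalizing the information content of features from a biased model collapses them onto the simplest (often spurious) predictive signal, which can then serve as a confounder proxy — can be illustrated with a toy sketch. This is not the paper's implementation; it is a minimal stand-in where the "pretrained biased features" are a hand-built two-dimensional toy (one noisy generalizable feature, one low-noise shortcut), and an L2 weight penalty on a logistic head plays the role of the information-minimization term. All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "pretrained biased features" Z = [core, spurious]:
# both correlate with the label, but the spurious dimension is a
# low-noise shortcut (the "simplest predictive feature").
n = 2000
y = rng.integers(0, 2, n)
s = 2 * y - 1                               # labels mapped to +/-1
core = s + 1.0 * rng.normal(size=n)         # generalizable but noisy signal
spur = s + 0.2 * rng.normal(size=n)         # easy shortcut (low noise)
Z = np.stack([core, spur], axis=1)

def train_head(Z, y, info_penalty, lr=0.1, steps=500):
    """Logistic head on frozen features. The L2 penalty is a crude proxy
    for information minimization: as it grows, the head keeps only the
    simplest feature(s) needed to predict y."""
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w)))           # sigmoid predictions
        grad = Z.T @ (p - y) / len(y) + info_penalty * w
        w -= lr * grad
    return w

w_plain = train_head(Z, y, info_penalty=0.0)   # unconstrained biased head
w_min = train_head(Z, y, info_penalty=1.0)     # information-minimized head

# Under heavy information minimization the head concentrates on the
# lowest-noise shortcut, so its output can be read as a bias/confounder
# proxy to be removed via a causal-theory-motivated correction.
print("plain weights:", w_plain)
print("info-min weights:", w_min)
```

In this toy, the penalized head has a much smaller weight norm and leans on the shortcut dimension; in the paper's setting the analogous compressed features are treated as confounder representations rather than discarded.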
Anthology ID:
2023.findings-emnlp.270
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4108–4123
URL:
https://aclanthology.org/2023.findings-emnlp.270
DOI:
10.18653/v1/2023.findings-emnlp.270
Cite (ACL):
Vaidehi Patil, Adyasha Maharana, and Mohit Bansal. 2023. Debiasing Multimodal Models via Causal Information Minimization. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4108–4123, Singapore. Association for Computational Linguistics.
Cite (Informal):
Debiasing Multimodal Models via Causal Information Minimization (Patil et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.270.pdf