%0 Conference Proceedings
%T Normalized Contrastive Learning for Text-Video Retrieval
%A Park, Yookoon
%A Azab, Mahmoud
%A Moon, Seungwhan
%A Xiong, Bo
%A Metze, Florian
%A Kundu, Gourab
%A Ahmed, Kirmani
%Y Goldberg, Yoav
%Y Kozareva, Zornitsa
%Y Zhang, Yue
%S Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates
%F park-etal-2022-normalized
%X Cross-modal contrastive learning has led the recent advances in multimodal retrieval with its simplicity and effectiveness. In this work, however, we reveal that cross-modal contrastive learning suffers from incorrect normalization of the sum retrieval probabilities of each text or video instance. Specifically, we show that many test instances are either over- or under-represented during retrieval, significantly hurting the retrieval performance. To address this problem, we propose Normalized Contrastive Learning (NCL) which utilizes the Sinkhorn-Knopp algorithm to compute the instance-wise biases that properly normalize the sum retrieval probabilities of each instance so that every text and video instance is fairly represented during cross-modal retrieval. Empirical study shows that NCL brings consistent and significant gains in text-video retrieval on different model architectures, with new state-of-the-art multimodal retrieval metrics on the ActivityNet, MSVD, and MSR-VTT datasets without any architecture engineering.
%R 10.18653/v1/2022.emnlp-main.17
%U https://aclanthology.org/2022.emnlp-main.17
%U https://doi.org/10.18653/v1/2022.emnlp-main.17
%P 248-260