Multilingual and Multimodal Topic Modelling with Pretrained Embeddings

Elaine Zosa, Lidia Pivovarova


Abstract
This paper presents M3L-Contrast, a novel multimodal multilingual (M3L) neural topic model for comparable data that maps texts from multiple languages and images into a shared topic space. Our model is trained jointly on texts and images and takes advantage of pretrained document and image embeddings to abstract away the complexities of working across different languages and modalities. As a multilingual topic model, it produces aligned language-specific topics, and as a multimodal model, it infers textual representations of semantic concepts in images. We demonstrate that our model is competitive with a zero-shot topic model in predicting topic distributions for comparable multilingual data and significantly outperforms a zero-shot model in predicting topic distributions for comparable texts and images. We also show that our model performs almost as well on unaligned embeddings as it does on aligned embeddings.
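The abstract describes the model only at a high level. To illustrate the general mechanism it names (projecting pretrained document and image embeddings into a shared topic space and aligning comparable pairs with a contrastive objective), here is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation (see the linked repository below); the class name, loss, and embedding dimensions are illustrative assumptions.

```python
# Sketch only: project pretrained text/image embeddings into a shared
# K-topic space and pull comparable pairs together with an InfoNCE-style
# contrastive loss. All names and dimensions here are assumptions, not
# the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicEncoder(nn.Module):
    """Maps a pretrained embedding to a distribution over K topics."""
    def __init__(self, embed_dim: int, num_topics: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, num_topics),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # Softmax yields a document-topic (or image-topic) distribution.
        return F.softmax(self.net(emb), dim=-1)

def contrastive_alignment_loss(theta_a, theta_b, temperature=0.1):
    """Topic distributions of aligned pairs (same row index) should be
    more similar than those of non-aligned pairs in the batch."""
    a = F.normalize(theta_a, dim=-1)
    b = F.normalize(theta_b, dim=-1)
    logits = a @ b.t() / temperature          # pairwise similarities
    targets = torch.arange(a.size(0))         # i-th text matches i-th image
    return F.cross_entropy(logits, targets)

# Toy usage: a batch of 8 comparable text/image pairs, 50 topics.
text_enc, image_enc = TopicEncoder(768, 50), TopicEncoder(512, 50)
text_emb = torch.randn(8, 768)    # e.g. multilingual sentence embeddings
image_emb = torch.randn(8, 512)   # e.g. pretrained image-encoder features
loss = contrastive_alignment_loss(text_enc(text_emb), image_enc(image_emb))
loss.backward()
```

The same pattern extends to multiple languages: one encoder per pretrained embedding source, with pairwise contrastive terms aligning their topic distributions in the shared space.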
Anthology ID:
2022.coling-1.355
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
4037–4048
URL:
https://aclanthology.org/2022.coling-1.355
Cite (ACL):
Elaine Zosa and Lidia Pivovarova. 2022. Multilingual and Multimodal Topic Modelling with Pretrained Embeddings. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4037–4048, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Multilingual and Multimodal Topic Modelling with Pretrained Embeddings (Zosa & Pivovarova, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.355.pdf
Code:
ezosa/m3l-topic-model
Data:
WIT (Wikipedia-based Image Text dataset)