%0 Journal Article
%T Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation
%A Pezzelle, Sandro
%A Takmaz, Ece
%A Fernández, Raquel
%J Transactions of the Association for Computational Linguistics
%D 2021
%V 9
%I MIT Press
%C Cambridge, MA
%F pezzelle-etal-2021-word
%X This study carries out a systematic intrinsic evaluation of the semantic representations learned by state-of-the-art pre-trained multimodal Transformers. These representations are claimed to be task-agnostic and shown to help on many downstream language-and-vision tasks. However, the extent to which they align with human semantic intuitions remains unclear. We experiment with various models and obtain static word representations from the contextualized ones they learn. We then evaluate them against the semantic judgments provided by human speakers. In line with previous evidence, we observe a generalized advantage of multimodal representations over language-only ones on concrete word pairs, but not on abstract ones. On the one hand, this confirms the effectiveness of these models to align language and vision, which results in better semantic representations for concepts that are grounded in images. On the other hand, models are shown to follow different representation learning patterns, which sheds some light on how and when they perform multimodal integration.
%R 10.1162/tacl_a_00443
%U https://aclanthology.org/2021.tacl-1.93
%U https://doi.org/10.1162/tacl_a_00443
%P 1563-1579