Spectral Graph-Based Method of Multimodal Word Embedding

Kazuki Fukui, Takamasa Oshikiri, Hidetoshi Shimodaira


Abstract
In this paper, we propose a novel method for multimodal word embedding, which exploits a generalized framework of multi-view spectral graph embedding to take into account visual appearances or scenes denoted by words in a corpus. We evaluated our method through word similarity tasks and a concept-to-image search task, and found that it provides word representations that reflect visual information, while somewhat trading off performance on the word similarity tasks. Moreover, we demonstrate that our method captures multimodal linguistic regularities, which enables recovering relational similarities between words and images by vector arithmetic.
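The "relational similarities by vector arithmetic" mentioned in the abstract can be illustrated with a minimal sketch. The vectors below are hand-set toy values, not the embeddings learned by the paper's spectral method; the `word:`/`img:` naming is purely illustrative of a shared word–image space.

```python
# Toy illustration of recovering a relational similarity between words
# and images via vector arithmetic in a shared embedding space.
# NOTE: these 3-d vectors are hypothetical, hand-set for illustration.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical multimodal embeddings: words and images of two concepts,
# where the word->image offset is roughly shared across concepts.
emb = {
    "word:cat": [0.9, 0.1, 0.3],
    "word:dog": [0.8, 0.2, 0.3],
    "img:cat":  [0.9, 0.1, 0.7],
    "img:dog":  [0.8, 0.2, 0.7],
}

# Analogy query: word:cat - word:dog + img:dog should land near img:cat.
query = [a - b + c for a, b, c in
         zip(emb["word:cat"], emb["word:dog"], emb["img:dog"])]
best = max((k for k in emb if k != "img:dog"),
           key=lambda k: cosine(query, emb[k]))
print(best)  # -> img:cat
```

In a real multimodal space the nearest neighbor is retrieved among thousands of candidates, but the arithmetic is the same.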
Anthology ID:
W17-2405
Volume:
Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Martin Riedl, Swapna Somasundaran, Goran Glavaš, Eduard Hovy
Venue:
TextGraphs
Publisher:
Association for Computational Linguistics
Pages:
39–44
URL:
https://aclanthology.org/W17-2405
DOI:
10.18653/v1/W17-2405
Cite (ACL):
Kazuki Fukui, Takamasa Oshikiri, and Hidetoshi Shimodaira. 2017. Spectral Graph-Based Method of Multimodal Word Embedding. In Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing, pages 39–44, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Spectral Graph-Based Method of Multimodal Word Embedding (Fukui et al., TextGraphs 2017)
PDF:
https://aclanthology.org/W17-2405.pdf
Data
NUS-WIDE, Visual Question Answering