Limitations of Cross-Lingual Learning from Image Search

Mareike Hartmann, Anders Søgaard


Abstract
Cross-lingual representation learning is an important step in making NLP scale to all the world’s languages. Previous work on bilingual lexicon induction suggests that it is possible to learn cross-lingual representations of words based on similarities between the images associated with those words. However, that work focused almost exclusively on the translation of nouns. Here, we investigate whether the meaning of other parts of speech (POS), in particular adjectives and verbs, can be learned in the same way. Our experiments across five language pairs indicate that previous work does not scale to the problem of learning cross-lingual representations beyond simple nouns.
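The image-based matching the abstract refers to can be sketched roughly as follows: represent each word by the mean feature vector of images retrieved for it, then translate a source word to the target word whose image representation is most cosine-similar. This is a minimal illustrative sketch with toy, hypothetical feature vectors, not the authors' actual pipeline; the function names and data are invented for illustration.

```python
# Sketch of image-based bilingual lexicon induction (toy data):
# each word is represented by the mean feature vector of its
# associated images; a source word is translated to the target
# word whose image representation is most cosine-similar.
import numpy as np

def mean_image_vector(image_features):
    """Average the feature vectors of a word's associated images."""
    return np.mean(image_features, axis=0)

def translate(src_vec, tgt_vocab):
    """Return the target word whose mean image vector is most
    cosine-similar to the source word's vector."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(tgt_vocab, key=lambda w: cos(src_vec, tgt_vocab[w]))

# Hypothetical 3-dimensional "image features" for English "dog"
# and German candidates "Hund" (dog) and "Katze" (cat).
dog_imgs_en = np.array([[0.9, 0.1, 0.0], [1.0, 0.2, 0.1]])
tgt = {
    "Hund": mean_image_vector(np.array([[0.95, 0.15, 0.05]])),
    "Katze": mean_image_vector(np.array([[0.0, 0.1, 1.0]])),
}
print(translate(mean_image_vector(dog_imgs_en), tgt))  # → Hund
```

The paper's finding is that this kind of matching works far less well for adjectives and verbs, whose associated image sets are visually much less consistent than those of concrete nouns.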
Anthology ID:
W18-3021
Volume:
Proceedings of the Third Workshop on Representation Learning for NLP
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Isabelle Augenstein, Kris Cao, He He, Felix Hill, Spandana Gella, Jamie Kiros, Hongyuan Mei, Dipendra Misra
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
159–163
URL:
https://aclanthology.org/W18-3021
DOI:
10.18653/v1/W18-3021
Cite (ACL):
Mareike Hartmann and Anders Søgaard. 2018. Limitations of Cross-Lingual Learning from Image Search. In Proceedings of the Third Workshop on Representation Learning for NLP, pages 159–163, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Limitations of Cross-Lingual Learning from Image Search (Hartmann & Søgaard, RepL4NLP 2018)
PDF:
https://aclanthology.org/W18-3021.pdf