2019
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model
Muthu Chidambaram | Yinfei Yang | Daniel Cer | Steve Yuan | Yunhsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
The scarcity of labeled training data across many languages is a significant roadblock for multilingual neural language processing. We approach the lack of in-language training data using sentence embeddings that map text written in different languages, but with similar meanings, to nearby embedding space representations. The representations are produced using a dual-encoder based model trained to maximize the representational similarity between sentence pairs drawn from parallel data. The representations are enhanced using multi-task training and unsupervised monolingual corpora. The effectiveness of our multilingual sentence embeddings is assessed on a comprehensive collection of monolingual, cross-lingual, and zero-shot/few-shot learning tasks.
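To make the training objective concrete, the following is a minimal sketch of a dual-encoder trained with in-batch negatives: parallel sentence pairs are pushed toward high similarity while other sentences in the batch serve as negatives. The encoder architecture, dimensions, and loss formulation shown here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Illustrative dual encoder mapping sentences to a shared embedding space."""
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # A shared embedding table with mean pooling stands in for the
        # paper's actual encoder; this is an assumption for brevity.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.proj = nn.Linear(embed_dim, hidden_dim)

    def encode(self, token_ids):
        pooled = self.embed(token_ids).mean(dim=1)     # (batch, embed_dim)
        return F.normalize(self.proj(pooled), dim=-1)  # unit-length vectors

def in_batch_similarity_loss(model, src_ids, tgt_ids):
    """Maximize similarity of parallel pairs; other batch items act as negatives."""
    src = model.encode(src_ids)         # (batch, hidden_dim)
    tgt = model.encode(tgt_ids)         # (batch, hidden_dim)
    scores = src @ tgt.T                # pairwise cosine similarities
    labels = torch.arange(src.size(0))  # true pair lies on the diagonal
    return F.cross_entropy(scores, labels)

# Usage: random token ids stand in for tokenized parallel sentences.
model = DualEncoder()
src = torch.randint(0, 10000, (8, 20))
tgt = torch.randint(0, 10000, (8, 20))
loss = in_batch_similarity_loss(model, src, tgt)
loss.backward()
```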