2020
Multilingual Universal Sentence Encoder for Semantic Retrieval
Yinfei Yang | Daniel Cer | Amin Ahmad | Mandy Guo | Jax Law | Noah Constant | Gustavo Hernandez Abrego | Steve Yuan | Chris Tar | Yun-hsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
We present easy-to-use, retrieval-focused multilingual sentence embedding models, made available on TensorFlow Hub. The models embed text from 16 languages into a shared semantic space using a multi-task trained dual-encoder that learns tied cross-lingual representations via translation bridge tasks (Chidambaram et al., 2018). The models achieve a new state of the art on monolingual and cross-lingual semantic retrieval (SR). Competitive performance is obtained on the related tasks of translation pair bitext retrieval (BR) and retrieval question answering (ReQA). On transfer learning tasks, our multilingual embeddings approach, and in some cases exceed, the performance of English-only sentence embeddings.
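As an illustrative, unofficial sketch (not taken from the paper), the snippet below shows cross-lingual semantic retrieval with a multilingual sentence encoder from TensorFlow Hub. The module handle universal-sentence-encoder-multilingual/3 and the tensorflow_text dependency are assumptions about the current release; check tfhub.dev before relying on them.

# Hedged sketch: retrieve a Spanish candidate for an English query by
# embedding both into the shared multilingual space.
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers the SentencePiece ops the module needs)

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

candidates = [
    "El gato duerme en el sofá.",
    "La reunión empieza a las nueve.",
    "Hoy va a llover por la tarde.",
]
query = ["What time does the meeting start?"]

cand_emb = encoder(candidates).numpy()
query_emb = encoder(query).numpy()

# Embeddings are approximately unit length, so dot product ~ cosine similarity.
scores = query_emb @ cand_emb.T
best = int(np.argmax(scores))
print(candidates[best], scores[0, best])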
2019
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model
Muthu Chidambaram | Yinfei Yang | Daniel Cer | Steve Yuan | Yunhsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
The scarcity of labeled training data across many languages is a significant roadblock for multilingual neural language processing. We approach the lack of in-language training data using sentence embeddings that map text written in different languages, but with similar meanings, to nearby embedding space representations. The representations are produced using a dual-encoder based model trained to maximize the representational similarity between sentence pairs drawn from parallel data. The representations are enhanced using multitask training and unsupervised monolingual corpora. The effectiveness of our multilingual sentence embeddings is assessed on a comprehensive collection of monolingual, cross-lingual, and zero-shot/few-shot learning tasks.
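The in-batch softmax ranking loss sketched below is a common way to train such translation-bridging dual encoders; it is a generic illustration of the technique, not the paper's exact objective, encoder, or hyperparameters.

# Hedged sketch: in-batch ranking loss over aligned (source, target) pairs.
import tensorflow as tf

def translation_ranking_loss(source_emb, target_emb):
    """source_emb, target_emb: [batch, dim] embeddings of aligned sentence pairs."""
    source_emb = tf.math.l2_normalize(source_emb, axis=1)
    target_emb = tf.math.l2_normalize(target_emb, axis=1)
    # Score every source against every target in the batch;
    # the diagonal holds the true translation pairs.
    scores = tf.matmul(source_emb, target_emb, transpose_b=True)
    labels = tf.range(tf.shape(scores)[0])
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=scores)
    )

Other in-batch examples act as negatives, which is what pushes translations of the same sentence together and unrelated sentences apart.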
2018
Learning Semantic Textual Similarity from Conversations
Yinfei Yang | Steve Yuan | Daniel Cer | Sheng-yi Kong | Noah Constant | Petr Pilar | Heming Ge | Yun-Hsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the Third Workshop on Representation Learning for NLP
We present a novel approach to learning representations for sentence-level semantic similarity using conversational data. Our method trains an unsupervised model to predict conversational responses. The resulting sentence embeddings perform well on the Semantic Textual Similarity (STS) Benchmark and SemEval 2017’s Community Question Answering (CQA) question similarity subtask. Performance is further improved by introducing multitask training, combining conversational response prediction and natural language inference. Extensive experiments show the proposed model achieves the best performance among all neural models on the STS Benchmark and is competitive with the state-of-the-art feature engineered and mixed systems for both tasks.
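For illustration only, the sketch below scores sentence pairs by embedding cosine similarity, which is the usual way such sentence encoders are evaluated on STS-style tasks. The TensorFlow Hub handle used here (the released English USE) is a stand-in assumption, not the conversation-trained model from this paper.

# Hedged sketch: pairwise similarity scoring with a pretrained sentence encoder.
import numpy as np
import tensorflow_hub as hub

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

pairs = [
    ("A man is playing a guitar.", "Someone is playing an instrument."),
    ("A man is playing a guitar.", "The stock market fell sharply."),
]
left = encoder([a for a, _ in pairs]).numpy()
right = encoder([b for _, b in pairs]).numpy()

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

for (a, b), u, v in zip(pairs, left, right):
    print(f"{cosine(u, v):.3f}  {a!r} vs {b!r}")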
Universal Sentence Encoder for English
Daniel Cer | Yinfei Yang | Sheng-yi Kong | Nan Hua | Nicole Limtiaco | Rhomni St. John | Noah Constant | Mario Guajardo-Cespedes | Steve Yuan | Chris Tar | Brian Strope | Ray Kurzweil
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We present easy-to-use TensorFlow Hub sentence embedding models with good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines that use no transfer learning and with baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.
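A minimal, hypothetical transfer-learning sketch follows: the encoder is used as a frozen sentence feature extractor via hub.KerasLayer under a tiny classification head. The module handle, layer sizes, and toy data are illustrative assumptions, not the paper's exact configuration.

# Hedged sketch: frozen sentence embeddings feeding a small classifier head.
import tensorflow as tf
import tensorflow_hub as hub

use_layer = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4",
    input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    use_layer,                                        # 512-d sentence embeddings
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g. binary sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data just to show the call signature; real transfer tasks supply their own labels.
texts = tf.constant(["great movie", "terrible plot", "loved it", "boring"])
labels = tf.constant([1, 0, 1, 0])
model.fit(texts, labels, epochs=2, verbose=0)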