Amin Ahmad
2023
mAggretriever: A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval
Sheng-Chieh Lin | Amin Ahmad | Jimmy Lin
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Multilingual information retrieval (MLIR) is a crucial yet challenging task due to the need for human annotations in multiple languages, making training data creation labor-intensive. In this paper, we introduce mAggretriever, which effectively leverages semantic and lexical features from pre-trained multilingual transformers (e.g., mBERT and XLM-R) for dense retrieval. To enhance training and inference efficiency, we employ approximate masked-language modeling prediction for computing lexical features, reducing the GPU memory requirement for mAggretriever fine-tuning by 70–85%. Empirical results demonstrate that mAggretriever, fine-tuned solely on English training data, surpasses existing state-of-the-art multilingual dense retrieval models that undergo further training on large-scale MLIR training data. Our code is available at url.
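As a rough illustration of combining semantic and lexical features for dense retrieval, a query–document pair can be scored by the dot product of the concatenated feature vectors. This is a hypothetical sketch of the general idea, not the paper's actual model; the function and argument names are invented for illustration:

```python
import numpy as np

def hybrid_score(q_sem, q_lex, d_sem, d_lex):
    # Concatenate a semantic ([CLS]-style) vector with a lexical
    # (vocabulary-projection-style) vector for query and document,
    # then score by inner product. Equivalent to summing the
    # semantic and lexical dot products.
    q = np.concatenate([q_sem, q_lex])
    d = np.concatenate([d_sem, d_lex])
    return float(q @ d)

# Tiny toy example with 2-dim semantic and 2-dim lexical features.
score = hybrid_score(
    np.array([1.0, 0.0]), np.array([0.0, 2.0]),   # query features
    np.array([1.0, 1.0]), np.array([1.0, 1.0]),   # document features
)
```

Because the dot product distributes over concatenation, the hybrid score is simply the sum of the semantic and lexical similarities.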
2020
Multilingual Universal Sentence Encoder for Semantic Retrieval
Yinfei Yang | Daniel Cer | Amin Ahmad | Mandy Guo | Jax Law | Noah Constant | Gustavo Hernandez Abrego | Steve Yuan | Chris Tar | Yun-hsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
We present easy-to-use, retrieval-focused multilingual sentence embedding models, made available on TensorFlow Hub. The models embed text from 16 languages into a shared semantic space using a multi-task trained dual-encoder that learns tied cross-lingual representations via translation bridge tasks (Chidambaram et al., 2018). The models achieve a new state-of-the-art in performance on monolingual and cross-lingual semantic retrieval (SR). Competitive performance is obtained on the related tasks of translation pair bitext retrieval (BR) and retrieval question answering (ReQA). On transfer learning tasks, our multilingual embeddings approach, and in some cases exceed, the performance of English-only sentence embeddings.
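The core retrieval operation over such a shared semantic space is ranking candidates by embedding similarity. A minimal sketch, assuming sentence embeddings have already been computed by some encoder (the function name `semantic_retrieve` is illustrative, not part of the released models):

```python
import numpy as np

def semantic_retrieve(query_vec, candidate_vecs, top_k=3):
    """Rank candidate embeddings by cosine similarity to a query embedding.

    query_vec: (d,) array; candidate_vecs: (n, d) array.
    Returns (indices, scores) of the top_k candidates, best first.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = c @ q                       # cosine similarities
    order = np.argsort(-scores)[:top_k]  # descending
    return order, scores[order]

# Toy example: the first candidate is identical to the query,
# the third is at 45 degrees, the second is orthogonal.
idx, sims = semantic_retrieve(
    np.array([1.0, 0.0]),
    np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
)
```

Because queries and candidates live in one shared space, the same ranking step works whether the query and candidates are in the same language or different ones.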
2019
ReQA: An Evaluation for End-to-End Answer Retrieval Models
Amin Ahmad | Noah Constant | Yinfei Yang | Daniel Cer
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Popular QA benchmarks like SQuAD have driven progress on the task of identifying answer spans within a specific passage, with models now surpassing human performance. However, retrieving relevant answers from a huge corpus of documents is still a challenging problem, and places different requirements on the model architecture. There is growing interest in developing scalable answer retrieval models trained end-to-end, bypassing the typical document retrieval step. In this paper, we introduce Retrieval Question-Answering (ReQA), a benchmark for evaluating large-scale sentence-level answer retrieval models. We establish baselines using both neural encoding models and classical information retrieval techniques. We release our evaluation code to encourage further work on this challenging task.
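Answer retrieval benchmarks of this kind are commonly scored with ranking metrics such as mean reciprocal rank (MRR). A minimal sketch of MRR, shown for illustration rather than as the benchmark's exact evaluation code:

```python
def mean_reciprocal_rank(ranked_ids, gold_ids):
    """MRR over a set of queries.

    ranked_ids[i]: candidate ids ranked best-first for query i.
    gold_ids[i]:   the id of the correct answer sentence for query i.
    Each query contributes 1/rank of its gold answer (0 if absent).
    """
    total = 0.0
    for ranked, gold in zip(ranked_ids, gold_ids):
        for rank, cid in enumerate(ranked, start=1):
            if cid == gold:
                total += 1.0 / rank
                break
    return total / len(gold_ids)

# Two toy queries: gold answers sit at rank 2 in both rankings.
mrr = mean_reciprocal_rank([[2, 1, 3], [1, 2]], [1, 2])
```

MRR rewards systems for placing the correct answer sentence near the top of the ranking, which is exactly the behavior an end-to-end answer retrieval model must exhibit at corpus scale.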