Chen Qu
2024
History-Aware Conversational Dense Retrieval
Fengran Mo | Chen Qu | Kelong Mao | Tianyu Zhu | Zhan Su | Kaiyu Huang | Jian-Yun Nie
Findings of the Association for Computational Linguistics: ACL 2024
Conversational search facilitates complex information retrieval by enabling multi-turn interactions between users and the system. Supporting such interactions requires a comprehensive understanding of the conversational inputs to formulate a good search query based on historical information. In particular, the search query should include the relevant information from the previous conversation turns. However, current approaches for conversational dense retrieval primarily rely on fine-tuning a pre-trained ad-hoc retriever using the whole conversational search session, which can be lengthy and noisy. Moreover, existing approaches are limited by the amount of manual supervision signals in the existing datasets. To address the aforementioned issues, we propose a **H**istory-**A**ware **Conv**ersational **D**ense **R**etrieval (HAConvDR) system, which incorporates two ideas: context-denoised query reformulation and automatic mining of supervision signals based on the actual impact of historical turns. Experiments on two public conversational search datasets demonstrate the improved history modeling capability of HAConvDR, in particular for long conversations with topic shifts.
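To make the idea concrete, here is a minimal sketch of context-denoised query reformulation in Python. The helper names and the word-overlap relevance test are hypothetical stand-ins, not the paper's mined supervision signals or actual implementation; the point is only that historical turns judged relevant are kept and concatenated into the reformulated query while noisy turns are dropped.

```python
# Minimal sketch of context-denoised query reformulation (hypothetical
# helper names; not the HAConvDR implementation). Keep only historical
# turns judged relevant to the current question, then concatenate them
# into a single reformulated query for a dense retriever.

def reformulate_query(history, current_query, is_relevant):
    """history: list of (question, answer) turns from earlier in the session.
    is_relevant: callable judging whether a past turn helps the current query."""
    kept = [q + " " + a for q, a in history if is_relevant(q, a, current_query)]
    return " ".join(kept + [current_query])

def overlap_relevance(q, a, current_query, threshold=1):
    # Toy word-overlap test, a stand-in for the paper's automatically
    # mined supervision signals.
    cur = set(current_query.lower().split())
    return len(cur & set((q + " " + a).lower().split())) >= threshold

history = [("Who directed Alien?", "Ridley Scott"),
           ("What year was it released?", "1979")]
print(reformulate_query(history, "What else did Scott direct?", overlap_relevance))
```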
2022
Exploring Dual Encoder Architectures for Question Answering
Zhe Dong | Jianmo Ni | Dan Bikel | Enrique Alfonseca | Yuan Wang | Chen Qu | Imed Zitouni
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Dual encoders have been used for question-answering (QA) and information retrieval (IR) tasks with good results. There are two major types of dual encoders: Siamese Dual Encoders (SDE), with parameters shared across the two encoders, and Asymmetric Dual Encoders (ADE), with two distinctly parameterized encoders. In this work, we explore dual encoder architectures for QA retrieval tasks. By evaluating on MS MARCO, open-domain NQ, and the MultiReQA benchmarks, we show that SDE performs significantly better than ADE. We further propose three improved versions of ADE. Based on the evaluation of QA retrieval tasks and direct analysis of the embeddings, we demonstrate that sharing parameters in the projection layers enables ADEs to perform competitively with SDEs.
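As a rough illustration of the SDE/ADE distinction, the PyTorch sketch below wires up both variants with an optional shared projection layer, mirroring the finding that sharing the projection lets ADEs approach SDE quality. The toy bag-of-embeddings encoder is an assumption for brevity; the paper uses pretrained transformer encoders.

```python
# Sketch of the two dual-encoder variants: an SDE shares all parameters
# across the query and passage towers; an ADE keeps two separate towers.
# Not the paper's implementation -- the towers here are toy encoders.
import torch
import torch.nn as nn

class TowerEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # toy text encoder

    def forward(self, token_ids):
        return self.emb(token_ids)

class DualEncoder(nn.Module):
    def __init__(self, siamese=True, share_projection=True, dim=64):
        super().__init__()
        self.q_enc = TowerEncoder(dim=dim)
        # SDE: one set of weights serves both towers; ADE: a second tower.
        self.p_enc = self.q_enc if siamese else TowerEncoder(dim=dim)
        self.q_proj = nn.Linear(dim, dim)
        # Sharing only the projection layer is the variant the paper
        # shows closes most of the ADE-SDE gap.
        self.p_proj = self.q_proj if (siamese or share_projection) else nn.Linear(dim, dim)

    def forward(self, q_ids, p_ids):
        q = self.q_proj(self.q_enc(q_ids))
        p = self.p_proj(self.p_enc(p_ids))
        return q @ p.T  # dot-product similarity matrix

q = torch.randint(0, 1000, (2, 8))   # 2 queries, 8 tokens each
p = torch.randint(0, 1000, (4, 32))  # 4 passages, 32 tokens each
scores = DualEncoder(siamese=False, share_projection=True)(q, p)
print(scores.shape)  # torch.Size([2, 4])
```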
Large Dual Encoders Are Generalizable Retrievers
Jianmo Ni | Chen Qu | Jing Lu | Zhuyun Dai | Gustavo Hernandez Abrego | Ji Ma | Vincent Zhao | Yi Luan | Keith Hall | Ming-Wei Chang | Yinfei Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
It has been shown that dual encoders trained on one domain often fail to generalize to other domains for retrieval tasks. One widespread belief is that the bottleneck layer of a dual encoder, where the final score is simply a dot-product between a query vector and a passage vector, is too limited compared to models with fine-grained interactions between the query and the passage. In this paper, we challenge this belief by scaling up the size of the dual encoder model while keeping the bottleneck layer as a single dot-product with a fixed size. With multi-stage training, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization. We further analyze the impact of the bottleneck layer and demonstrate diminishing improvement when scaling up the embedding size. Experimental results show that our dual encoders, Generalizable T5-based dense Retrievers (GTR), significantly outperform previous sparse and dense retrievers on the BEIR dataset. Most surprisingly, our ablation study finds that GTR is very data-efficient, as it only needs 10% of MS MARCO supervised data to match the out-of-domain performance of using all supervised data.
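The fixed bottleneck the paper scales around can be stated in a few lines: however large the encoders become, the final relevance score stays a single dot-product between fixed-size vectors. A minimal sketch, with random vectors standing in for GTR embeddings:

```python
# The scoring bottleneck: one dot-product between a query vector and a
# passage vector of fixed dimension, regardless of encoder size.
# Random vectors below are stand-ins for GTR embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 768  # fixed embedding size; scaling the encoder does not change it

def score(query_vec, passage_vecs):
    """Dot-product relevance scores for one query against many passages."""
    return passage_vecs @ query_vec

query = rng.standard_normal(dim)
passages = rng.standard_normal((5, dim))
print(score(query, passages).argmax())  # index of the best-scoring passage
```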