Simple Entity-Centric Questions Challenge Dense Retrievers

Open-domain question answering has exploded in popularity recently due to the success of dense retrieval models, which have surpassed sparse models using only a few supervised training examples. However, in this paper, we demonstrate that current dense models are not yet the holy grail of retrieval. We first construct EntityQuestions, a set of simple, entity-rich questions based on facts from Wikidata (e.g., "Where was Arve Furset born?"), and observe that dense retrievers drastically underperform sparse methods. We investigate this issue and uncover that dense retrievers can only generalize to common entities unless the question pattern is explicitly observed during training. We discuss two simple solutions towards addressing this critical problem. First, we demonstrate that data augmentation is unable to fix the generalization problem. Second, we argue that a more robust passage encoder helps facilitate better question adaptation using specialized question encoders. We hope our work sheds light on the challenges of creating a robust, universal dense retriever that works well across different input distributions.


Introduction
Recent dense passage retrievers outperform traditional sparse retrieval methods like TF-IDF and BM25 (Robertson and Zaragoza, 2009) by a large margin on popular question answering datasets (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Xiong et al., 2021). These dense models are trained on supervised QA datasets and can surpass sparse methods with only a modest number of training examples.

In this work, we argue that dense retrieval models are not yet robust enough to replace sparse methods, and we investigate some of the key shortcomings dense retrievers still face. We first construct EntityQuestions, an evaluation benchmark of simple, entity-centric questions like "Where was Arve Furset born?", and show that dense retrieval methods generalize very poorly on it. As shown in Table 1, a DPR model (Karpukhin et al., 2020) trained on either a single dataset, Natural Questions (NQ) (Kwiatkowski et al., 2019), or a combination of common QA datasets drastically underperforms the sparse BM25 baseline (49.7% vs. 71.2% on average), with the gap on some question patterns reaching 60% absolute!

Based on these results, we perform a deep dive into why a single dense model performs so poorly on these simple questions. We decouple the two distinct aspects of these questions, the entities and the question pattern, and identify which gives dense models such a hard time. We discover the dense model is only able to successfully answer questions based on common entities, quickly degrading on rarer entities. We also observe that dense models can generalize to unseen entities only when the question pattern is explicitly observed during training.
We end with two investigations of practical solutions to this crucial problem. First, we consider data augmentation and analyze the trade-off between single- and multi-task fine-tuning. Second, we consider a single fixed passage index paired with fine-tuned, specialized question encoders, leading to memory-efficient transfer to new questions.
We find that data augmentation, while able to close gaps on a single domain, is unable to consistently improve performance on unseen domains. We also find that building a robust passage encoder is crucial in order to successfully adapt to new domains. We view this study as one important step towards building universal dense retrieval models.

Background and Related Work
Sparse retrieval Before the emergence of dense retrievers, traditional sparse retrievers such as TF-IDF and BM25 were the de facto method in open-domain question answering systems (Chen et al., 2017; Yang et al., 2019). These sparse models measure similarity using weighted term matching between questions and passages and are not trained on a particular data distribution. It is well known that sparse models are great at lexical matching but fail to capture synonyms and paraphrases.
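As a rough illustration (not any particular library's implementation), BM25's weighted term matching can be sketched as follows; the `k1` and `b` values are the standard default parameters, and documents are assumed to be pre-tokenized lists of terms:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized document against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency for each distinct query term
    df = {t: sum(1 for d in docs if t in d) for t in set(query)}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores
```

Because scoring depends only on term overlap and corpus statistics, no training data is involved, which is exactly why BM25's behavior does not depend on any particular question distribution.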
Dense retrieval In contrast, dense models (Lee et al., 2019; Karpukhin et al., 2020; Guu et al., 2020) measure similarity using representations learned from supervised QA datasets, leveraging pre-trained language models like BERT. In this paper, we use the popular dense passage retriever (DPR) model (Karpukhin et al., 2020) as our main evaluation (detailed experimental settings are in Appendix B), and we also report the evaluation of REALM (Guu et al., 2020) in Appendix A. DPR models the retrieval problem using two encoders, a question encoder and a passage encoder, both initialized from BERT. DPR is trained with a contrastive objective, using in-batch negatives and hard negatives mined from BM25. During inference, a pre-defined large set of passages (e.g., 21 million passages in English Wikipedia) is encoded and pre-indexed; for any test question, the top passages with the highest similarity scores are returned. Recently, other advances have been made in improving dense retrieval, including incorporating better hard negatives (Xiong et al., 2021; Qu et al., 2021) and fine-grained phrase retrieval (Lee et al., 2021). We leave these for future investigation.
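The in-batch contrastive objective described above can be sketched in miniature. The toy vectors below stand in for BERT encoder outputs; this is a simplified sketch without the hard negatives DPR also uses:

```python
import math

def inbatch_contrastive_loss(q_vecs, p_vecs):
    """DPR-style loss: the positive passage for question i sits at batch
    index i; all other passages in the batch serve as negatives."""
    loss = 0.0
    for i, q in enumerate(q_vecs):
        # inner-product similarity against every passage in the batch
        sims = [sum(a * b for a, b in zip(q, p)) for p in p_vecs]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        loss += -(sims[i] - log_denom)  # negative log-softmax of the positive
    return loss / len(q_vecs)
```

Minimizing this loss pulls each question vector toward its positive passage and pushes it away from the other passages in the batch, so at inference time a simple inner product over the pre-built index ranks passages.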
Generalization problem Despite the impressive in-domain performance of dense retrievers, their ability to generalize to unseen questions remains relatively under-explored. Recently, Lewis et al. (2021a) discovered a large overlap between the training and test sets of popular QA benchmarks, concluding that current models tend to memorize training questions and perform significantly worse on non-overlapping questions. The AmbER test sets (Chen et al., 2021) are designed to study the entity disambiguation capabilities of passage retrievers and entity linkers; they find models perform much worse on rare entities than on common entities. Consistent with this, our results show dense retrieval models generalize poorly, especially on rare entities. We further conduct a series of analyses to dissect the problem and investigate potential approaches for learning robust dense retrieval models. Finally, concurrent work (Thakur et al., 2021) introduces the BEIR benchmark for zero-shot evaluation of retrieval models and shows that dense retrieval models underperform BM25 on most of its datasets.

EntityQuestions
In this section, we build a new benchmark, EntityQuestions, a set of simple, entity-centric questions, and compare dense and sparse retrievers on it.
Dataset collection We select 24 common relations from Wikidata (Vrandečić and Krötzsch, 2014) and convert fact (subject, relation, object) triples into natural language questions using manually defined templates (Appendix A). To ensure the converted questions are answerable from Wikipedia, we sample triples from the T-REx dataset (Elsahar et al., 2018), where each triple is aligned with an evidence sentence in Wikipedia. We select relations according to the following criteria: (1) there are enough triples (>2k) in T-REx; (2) it is easy to formulate clear questions for the relation; (3) the relation does not have only a few answer candidates (e.g., gender), which would cause too many false negatives when evaluating the retriever; and (4) the set includes both person-related relations (e.g., place-of-birth) and non-person relations (e.g., headquarter). For each relation, we randomly sample up to 1,000 facts to form the evaluation set. We report accuracy averaged over all relations of EntityQuestions.
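The triple-to-question conversion can be sketched as below. The templates shown are illustrative stand-ins (the paper's actual templates are listed in its Appendix A), and the relation names are just dictionary keys for this sketch:

```python
# Illustrative templates only; the benchmark's real templates are defined
# manually per relation in the paper's appendix.
TEMPLATES = {
    "place-of-birth": "Where was {subject} born?",
    "headquarter": "Where is the headquarters of {subject}?",
    "creator": "Who is the creator of {subject}?",
}

def triple_to_qa(subject, relation, obj):
    """Convert a Wikidata (subject, relation, object) triple into a QA pair:
    the templated question plus the object as its gold answer."""
    question = TEMPLATES[relation].format(subject=subject)
    return {"question": question, "answer": obj}
```

Because every question in a relation shares one surface pattern, the benchmark isolates the entity as the only thing that varies, which is what makes the later entity-vs-pattern analysis possible.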

Results
We evaluate DPR and BM25 on the EntityQuestions dataset and report results in Table 1 (see full results and examples in Appendix A). DPR trained on NQ significantly underperforms BM25 on almost all sets of questions. For example, on the question "Where was [E] born?", BM25 outperforms DPR by 49.8% absolute in top-20 retrieval accuracy. Although training DPR on multiple datasets improves performance (from 49.7% to 56.7% on average), it still clearly pales in comparison to BM25. We note the gaps are especially large on questions about person entities.
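The top-20 retrieval accuracy used throughout can be computed as the fraction of questions for which any of the top-k retrieved passages contains a gold answer. A minimal sketch, where simple substring containment stands in for the actual answer-matching procedure:

```python
def top_k_accuracy(retrieved, answers, k=20):
    """Fraction of questions whose top-k retrieved passages contain an answer.

    retrieved: per question, a ranked list of passage strings.
    answers:   per question, a list of acceptable answer strings.
    """
    hits = 0
    for passages, golds in zip(retrieved, answers):
        if any(g in p for p in passages[:k] for g in golds):
            hits += 1
    return hits / len(retrieved)
```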
In order to test the generality of our findings, we also evaluate the retrieval performance of REALM (Guu et al., 2020) on EntityQuestions. Compared to DPR, REALM adopts a pre-training task called salient span masking (SSM), along with an inverse cloze task from Lee et al. (2019). We include the evaluation results in Appendix A. We find that REALM still scores much lower than BM25 over all relations (19.6% on average). This suggests that incorporating pre-training tasks such as SSM still does not solve the generalization problem on these simple entity-centric questions.

Dissecting the Problem: Entities vs. Question Patterns
In this section, we investigate why dense retrievers do not perform well on these questions. Specifically, we want to understand whether the poor generalization should be attributed to (a) novel entities or (b) unseen question patterns. To do this, we study DPR trained on the NQ dataset and evaluate it on three representative question templates: place-of-birth, headquarter, and creator.

Dense retrievers exhibit popularity bias
We first determine how the entity [E] in the question affects DPR's ability to retrieve relevant passages. To do this, we consider all triples in Wikidata that are associated with a particular relation, and order them based on frequency of the subject entity in Wikipedia. In our analysis, we use the Wikipedia hyperlink count as a proxy for an entity's frequency. Next, we group the triples into 8 buckets such that each bucket has approximately the same cumulative frequency. Using these buckets, we consider two new evaluation sets for each relation. The first (denoted "rand ent") randomly samples at most 1,000 triples from each bucket. The second (denoted "train ent") selects all triples within each bucket that have subject entities observed in questions within the NQ training set, as identified by ELQ (Li et al., 2020).
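The frequency-bucketing step above can be sketched as follows. This is a simplified version; the exact split points and tie-breaking used in the paper are not specified, so those details here are assumptions:

```python
def bucket_by_frequency(triples, freq, n_buckets=8):
    """Sort triples by subject-entity frequency (descending) and split them
    into buckets with approximately equal cumulative frequency.

    triples: iterable of (subject, relation, object)-style tuples.
    freq:    maps a subject entity to its Wikipedia hyperlink count.
    """
    ordered = sorted(triples, key=lambda t: freq[t[0]], reverse=True)
    total = sum(freq[t[0]] for t in ordered)
    target = total / n_buckets  # cumulative frequency per bucket
    buckets, current, acc = [], [], 0
    for t in ordered:
        current.append(t)
        acc += freq[t[0]]
        if acc >= target and len(buckets) < n_buckets - 1:
            buckets.append(current)
            current, acc = [], 0
    buckets.append(current)
    return buckets
```

Because frequent entities contribute more mass, early buckets contain few very common entities while later buckets contain many rare ones, which is what lets the evaluation compare retrieval accuracy across the popularity spectrum.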
We evaluate DPR and BM25 on these evaluation sets and plot top-20 accuracy in Figure 1. DPR performs well on the most common entities but quickly degrades on rarer entities, while BM25 is far less sensitive to entity frequency. It is also notable that DPR performs better on entities seen during NQ training than on randomly sampled entities. This suggests that DPR representations are much better at capturing the most common entities, as well as entities observed during training.

Observing questions helps generalization
We next investigate whether DPR generalizes to unseen entities when trained on the question pattern.
For each relation considered, we build a training set with at most 8,000 triples. We ensure no tokens from training triples overlap with tokens from triples in the corresponding test set. In addition to the question template used during evaluation, we also build a training set based on a syntactically different but semantically equivalent question template. We fine-tune DPR models on the training set for each relation, test on the EntityQuestions evaluation set for that relation, and report results in Table 2.

Clearly, observing the question pattern during training allows DPR to generalize well to unseen entities. On all three relations, DPR can match or even outperform BM25 in retrieval accuracy. Training on the equivalent question pattern achieves performance comparable to the exact pattern, showing dense models do not rely on the specific phrasing of the question. We also fine-tune the question encoder and passage encoder separately. As shown in Table 2, there is a surprisingly large discrepancy between training only the passage encoder (OnlyP) and only the question encoder (OnlyQ): for example, on place-of-birth, DPR achieves 72.8% accuracy with only the passage encoder fine-tuned, while fine-tuning only the question encoder performs substantially worse.

To understand what the passage representations learn from fine-tuning, we visualize the DPR passage space before and after fine-tuning using t-SNE (Van der Maaten and Hinton, 2008). We plot the representations of positive passages sampled from NQ and place-of-birth in Figure 2. Before fine-tuning, positive passages for place-of-birth questions are clustered together. Discriminating between passages in this clustered space using an inner product is difficult, which explains why fine-tuning only the question encoder yields minimal gains. After fine-tuning, the passages are distributed more sparsely, making differentiation much easier.

Towards Robust Dense Retrieval
Equipped with a clear understanding of the issues, we explore some simple techniques aimed at fixing the generalization problem.
Data augmentation We first explore whether fine-tuning on questions from a single EntityQuestions relation can help generalize on the full set of EntityQuestions as well as other QA datasets such as NQ. We construct a training set of questions for a single relation and consider two training regimes: one where we fine-tune on relation questions alone; and a second where we fine-tune on both relation questions and NQ in a multi-task fashion. We perform this analysis for three relations and report top-20 retrieval accuracy in Table 3.
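The multi-task regime above can be sketched as simple dataset pooling. How the two sources are actually interleaved during training is not specified here, so uniform pooling and shuffling is an assumption of this sketch:

```python
import random

def mix_training_data(relation_examples, nq_examples, seed=0):
    """Multi-task regime sketch: pool single-relation questions with NQ
    questions and shuffle, so each training batch mixes both distributions.
    The single-task regime would simply use relation_examples alone."""
    pool = list(relation_examples) + list(nq_examples)
    random.Random(seed).shuffle(pool)
    return pool
```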
We find that fine-tuning only on a single relation improves accuracy on EntityQuestions meaningfully, but degrades performance on NQ and still falls far behind BM25 on average. When fine-tuning on both relation questions and NQ together, most of the performance on NQ is retained, but the gains on EntityQuestions are much more muted. Clearly, fine-tuning on one type of entity-centric question does not necessarily fix the generalization problem for other relations. This trade-off between accuracy on the original distribution and improvement on the new questions presents an interesting tension for universal dense encoders to grapple with.
Specialized question encoders While it is challenging to have one retrieval model for all unseen question distributions, we consider an alternative approach of having a single passage index and adapting specialized question encoders. Since the passage index is fixed across different question patterns and cannot be adapted using fine-tuning, having a robust passage encoder is crucial.
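The fixed-index setup can be sketched as follows. The `question_encoder` callable and the `(passage_id, vector)` index format are illustrative stand-ins; in practice the index is built once with a frozen passage encoder and searched with an approximate nearest-neighbor library rather than the exhaustive scan shown here:

```python
def retrieve(question, question_encoder, passage_index, k=20):
    """Retrieve top-k passages from a FIXED pre-encoded index.

    passage_index: list of (passage_id, vector) pairs, computed once with a
    frozen passage encoder. Only question_encoder is specialized per question
    distribution, so adapting to a new domain costs one small encoder, not a
    re-encoding of the whole corpus.
    """
    q = question_encoder(question)
    ranked = sorted(
        passage_index,
        key=lambda pv: sum(a * b for a, b in zip(q, pv[1])),
        reverse=True,  # higher inner product = more similar
    )
    return [pid for pid, _ in ranked[:k]]
```

Swapping in a different fine-tuned question encoder changes the query vectors but leaves the (expensive-to-build) passage index untouched, which is what makes this transfer memory-efficient.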
We compare two DPR passage encoders: one trained on NQ and the other on the PAQ dataset (Lewis et al., 2021b); our PAQ sampling scheme is described in Appendix B. We expect the passage encoder trained on PAQ to be more robust because (a) 10M passages are sampled in PAQ, which is arguably more varied than NQ, and (b) all plausible answer spans are identified using automatic tools. We fine-tune a question encoder for each relation in EntityQuestions, keeping the passage encoder fixed. As shown in Table 4 (per-relation accuracy can be found in Appendix C), fine-tuning on top of the passage encoder trained on PAQ improves performance over fine-tuning on top of the encoder trained on NQ. This suggests the DPR-PAQ passage encoder is more robust and adaptable, nearly closing the gap with BM25 using a single passage index. We believe constructing a robust passage index is an encouraging avenue for future work towards a more general retriever.

Conclusion
In this study, we show that DPR significantly underperforms BM25 on EntityQuestions, a dataset of simple questions based on facts mined from Wikidata. We derive key insights about why DPR performs so poorly on this dataset: DPR learns robust representations for common entities, but has trouble differentiating rarer entities without explicitly observing the question pattern during training. We suggest future work incorporate explicit entity memory into dense retrievers to help differentiate rare entities. Numerous recent works (Wu et al., 2020; Li et al., 2020; Cao et al., 2021) demonstrate that retrievers can learn dense representations for a large number of Wikipedia entities. DPR could also leverage entity-aware embedding models like EaE (Févry et al., 2020) or LUKE (Yamada et al., 2020) to better recall long-tail entities.

Ethical Considerations
Our proposed dataset, EntityQuestions, is constructed by sampling (subject, relation, object) triples from Wikidata, which is dedicated to the public domain under the Creative Commons CC0 License. In general, machine learning has the ability to amplify biases present implicitly and explicitly in the training data. The models we reference in our study are based on BERT, which has been shown to learn and exacerbate stereotypes during training (e.g., Kurita et al., 2019; Tan and Celis, 2019; Nadeem et al., 2021). We further train these models on Wikidata triples, which again has the potential to amplify harmful and toxic biases.
In the space of open-domain question answering, deployed systems leveraging biased pre-trained models like BERT will likely be less accurate or more biased when asked questions related to stereotyped and marginalized groups. We acknowledge this fact and caution those who build on our work to consider and study this implication before deploying systems in the real world.


B Experimental Details
Experimental settings of DPR In our experiments, we use either the pre-trained DPR models released by the authors or DPR models we re-trained ourselves (Table 4). All experiments are carried out on 4 × 11GB Nvidia RTX 2080Ti GPUs. For all fine-tuning experiments, we fine-tune for 10 epochs with a learning rate of 2 × 10^-5 and a batch size of 24. When we retrain DPR from scratch, we train for 20 epochs with a batch size of 24 and a learning rate of 2 × 10^-5 (the original DPR models were trained on 8 × 32GB GPUs with a batch size of 128; we reduce the batch size due to limited computational resources).

C Per-relation Accuracy with Different Passage Encoders
We fine-tune DPR while keeping the passage encoder fixed, using a passage encoder trained on either NQ or PAQ. Table 8 compares the per-relation accuracy of DPR with a fixed passage encoder trained on NQ versus PAQ. As shown, the passage encoder trained on PAQ is much more robust than the one trained on NQ. For many non-person relations, using the PAQ-based passage encoder even outperforms BM25.

Table 8: Top-20 retrieval accuracy on NQ and EntityQuestions (EQ). Per-rel FT: we fine-tune an individual question encoder for each relation. EQ FT: we fine-tune a single question encoder on all relations in EntityQuestions.