RELiC: Retrieving Evidence for Literary Claims

Humanities scholars commonly provide evidence for claims that they make about a work of literature (e.g., a novel) in the form of quotations from the work. We collect a large-scale dataset (RELiC) of 78K literary quotations and surrounding critical analysis and use it to formulate the novel task of literary evidence retrieval, in which models are given an excerpt of literary analysis surrounding a masked quotation and asked to retrieve the quoted passage from the set of all passages in the work. Solving this retrieval task requires a deep understanding of complex literary and linguistic phenomena, which proves challenging to methods that overwhelmingly rely on lexical and semantic similarity matching. We implement a RoBERTa-based dense passage retriever for this task that outperforms existing pretrained information retrieval baselines; however, experiments and analysis by human domain experts indicate that there is substantial room for improvement.


Introduction
When analyzing a literary work (e.g., a novel or short story), scholars make claims about the text and provide supporting evidence in the form of quotations from the work (Thompson, 2002; Finnegan, 2011; Graff et al., 2014). For example, Monaghan (1980) claims that Elizabeth, the main character in Jane Austen's Pride and Prejudice, doesn't just refuse an offer to join the standoffish bachelor Darcy and the wealthy Bingleys on their morning walk, "but does so in such a way as to group Darcy with the snobbish Bingley sisters," and then directly quotes Elizabeth's tongue-in-cheek rejection: "No, no; stay where you are. You are charmingly grouped, and appear to uncommon advantage. The picturesque would be spoilt by admitting a fourth." Literary scholars construct arguments like these by making complex connective inferences between their interpretations, framed as claims, and quotations (e.g., recognizing that Elizabeth says "charmingly grouped" and "picturesque" ironically in order to group Darcy with the snobbish Bingley sisters). This process requires a deep understanding of both literary phenomena, such as irony and metaphor, and linguistic phenomena (coreference, paraphrasing, and stylistics). In this paper, we computationally study the relationship between literary claims and quotations by collecting a large-scale dataset for Retrieving Evidence for Literary Claims (RELiC), which contains 78K scholarly excerpts of literary analysis that each directly quote a passage from one of 79 widely-read English texts.
The complexity of the claims and quotations in RELiC makes it a challenging testbed for modern neural retrievers: given just the text of the claim and analysis that surrounds a masked quotation, can a model retrieve the quoted passage from the set of all possible passages in the literary work? This literary evidence retrieval task (see Figure 1) differs considerably from retrieval problems commonly studied in NLP, such as those used for fact checking (Thorne et al., 2018), open-domain QA (Chen et al., 2017; Chen and Yih, 2020), and text generation (Krishna et al., 2021), in the relative lack of lexical or even semantic similarity between the query and the target passage. Instead of latching onto surface-level cues, our task requires models to understand complex devices in literary writing and apply general theories of interpretation. RELiC is also challenging because of the large number of retrieval candidates: for War and Peace, the longest literary work in the dataset, models must choose from ∼32K candidate passages.
[Figure 1: An example of our literary evidence retrieval task and the model we built to solve it. The model must retrieve a missing quotation from Pride and Prejudice given the literary claims and analysis that surround the quotation; the retrieval candidate set for this example consists of all 7,514 sentences from the novel. Step 1: compute a context embedding c by passing the literary claims and analysis that surround the missing quotation through a RoBERTa network. Step 2: compute candidate quotation embeddings q_i by passing each sentence in the book through a separate RoBERTa model. Step 3: apply a contrastive objective to push the context vector close to the correct quotation vector (here, that of the 4,387th sentence from the novel) and far from all other candidates.]

How well do state-of-the-art retrievers perform on RELiC? Inspired by recent research on dense passage retrieval (Guu et al., 2020; Karpukhin et al., 2020), we build a neural model (dense-RELiC) by embedding both scholarly claims and candidate literary quotations with pretrained RoBERTa networks (Liu et al., 2019), which are then fine-tuned
using a contrastive objective that encourages the representation of the ground-truth quotation to lie near that of the claim. Both sparse retrieval methods such as BM25 and pretrained dense retrievers such as DPR and REALM perform poorly on RELiC, which underscores the difference between our dataset and existing information retrieval benchmarks (Thakur et al., 2021) on which these baselines are much more competitive. Our dense-RELiC model fares better than these baselines but still lags far behind human performance, and an analysis of its errors suggests that it struggles to understand complex literary phenomena. Finally, we qualitatively explore whether our dense-RELiC model can be used to support evidence-gathering efforts by researchers in the humanities. Inspired by prompt-based querying (Jiang et al., 2020), we issue our own out-of-distribution queries to the model by formulating simple descriptions of events or devices of interest (e.g., symbols of Gatsby's lavish lifestyle) and discover that it often returns relevant quotations. To facilitate future research in this direction, we publicly release our dataset and models.

Collecting a Dataset for Literary Evidence Retrieval
We collect a dataset for the task of Retrieving Evidence for Literary Claims, or RELiC, the first large-scale retrieval dataset that focuses on the challenging literary domain. Each example in RELiC consists of two parts: (1) the scholarly context surrounding a quotation, with the quotation itself masked out, and (2) the quoted passage from the primary source work.

Collecting and Preprocessing RELiC
Selecting works of literature: We collect 79 primary source works written in or translated into English from Project Gutenberg and Project Gutenberg Australia. These public domain sources were selected because of their popularity and status as members of the Western literary canon, which also yields more scholarship (Porter, 2018). We then searched the HathiTrust digital library for documents that quote passages from these works; a large proportion of HathiTrust documents are scholarly in nature, so most of these matches yielded critical analysis of the 79 primary source works. We received permission from the HathiTrust to publicly release short windows of text surrounding each matching quotation.

Filtering and preprocessing: The scholarly articles we collected from our HathiTrust queries were filtered to exclude duplicates and non-English sources. We then preprocessed the resulting text to remove pervasive artifacts such as in-line citations, headers, footers, page numbers, and word breaks using a pattern-matching approach (details in Appendix A). Finally, we applied sentence tokenization using spaCy's dependency parser-based sentence segmenter (https://spacy.io/) to standardize the size of the windows in our dataset; we modify spaCy's default segmenter to treat ellipses, colons, and semicolons as additional sentence boundaries, based on the observation that literary scholars often quote only part of what would typically be defined as a sentence. Each window in RELiC contains the identified quotation and four sentences of claims and analysis on each side of the quotation (see Table 2 for examples); the HathiTrust permitted us to release windows consisting of up to eight sentences of scholarly analysis. While more context is of course desirable, we note that (1) conventional model sizes are limited in input sequence length, and (2) context further from the quoted material has diminishing value, as it is likely to be less relevant to the quoted span. To avoid asking models to retrieve a quote they have already seen during training, we create training, validation, and test splits such that primary sources in each fold are mutually exclusive. Statistics of our dataset sources are provided in Appendix A.3, and Table 1 contains detailed statistics of RELiC.

To the best of our knowledge, RELiC is the first retrieval dataset in the literary domain, and the only one that requires understanding complex phenomena like irony and metaphor. We provide a detailed comparison of RELiC to other retrieval datasets in the recently-proposed BEIR retrieval benchmark (Thakur et al., 2021) in Appendix Table A6. RELiC has a much longer query length (157.7 tokens on average) than all BEIR datasets except ArguAna (Wachsmuth et al., 2018). Furthermore, our results in Section 3.3 show that while these longer queries confuse pretrained retriever models (which rely heavily on token overlap), a model trained on RELiC is able to leverage the longer queries for better retrieval.
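The custom sentence boundaries described above can be approximated without spaCy. The following is a simplified sketch of our own (not the paper's actual parser-based segmenter), which also splits on ellipses, colons, and semicolons to account for partial quotations:

```python
import re

# Split after sentence-final punctuation, plus the extra boundaries
# (ellipsis, colon, semicolon) motivated by partial quotations.
_BOUNDARY = re.compile(r'(?<=[.!?:;\u2026])\s+')

def segment(text):
    """Return non-empty sentence-like segments of `text`."""
    return [s for s in _BOUNDARY.split(text) if s]
```

A real implementation would instead register these characters as custom sentence starts in the spaCy pipeline, keeping the parser-based segmentation for ordinary sentences.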

Analyzing different types of quotation
What are the different ways in which literary scholars use direct quotation in RELiC? We perform a manual analysis of 200 held-out examples to gain a better understanding of quotation usage, categorizing each quotation into the following three types: Claim-supporting evidence: In 151 of the 200 annotated examples, literary scholars used direct quotation to provide evidence for a more general claim about the primary source work. In the first row of Table 2, Hartstein (1985) claims that "this whale... brings into focus such fundamental questions as the knowability of space:" and then quotes the following metaphorical description from Moby Dick as evidence: "And as for this whale spout, you might almost stand in it, and yet be undecided as to what it is precisely." When quoted material is used as claim-supporting evidence, the context before and after usually refers directly to the quoted material; 7 for example, the paradoxes of reality and uncertainties of this world are exemplified by the vague nature of the whale spout.
Paraphrase-supporting evidence: In 31 of the examples, we observe that scholars used the primary source work to support their own paraphrasing of the plot in order to contextualize later analysis. In the second row of Table 2, Blackstone (1972) uses the quoted material to enhance a summary of a specific scene in which Jacob's mind is wandering during a chapel service. Jacob's daydreaming is later used in an analysis of Cambridge as a location in Virginia Woolf's works, but no literary argument is made in the immediate context. When quoted material is employed as paraphrase-supporting evidence, the surrounding context does not refer directly to the quotation.

Table 2: Examples of the two major types of evidence identified in our manual analysis of RELiC. Claim-supporting evidence uses quotations to support more general literary claims, while paraphrase-supporting evidence uses quotations to corroborate summaries of the plot. The bottom two rows show the same quotation (from Willa Cather's O Pioneers!) being used as evidence in different ways, highlighting the dataset's complexity. Each row gives the quote type, followed by the preceding context, the primary source quotation, and the subsequent context.

Claim-supporting evidence (153): If this whale inspires the most lyrical passages in the novel, it also brings into focus such fundamental questions as the knowability of space: And as for this whale spout, you might almost stand in it, and yet be undecided as to what it is precisely. But Ishmael stands before the paradoxes of reality with historical and scientific intellect, wisdom, and comic elasticity that accommodates - however tenuously - the uncertainties of this world (Hartstein, 1985).

Paraphrase-supporting evidence (25): But then, suddenly, Jacob's thought switches back to the lantern under the tree, with the old toad and the beetles and the moths crossing from side to side in the light, senselessly. Now there was a scraping and murmuring. He caught Timmy Durrant's eye; looked very sternly at him; and then, very solemnly, winked. From a boat on the Cam there is another sort of beauty to be seen. There are buttercups gilding the meadows, and cows munching, and the legs of children deep in the grass. Jacob looks at all these things and becomes absorbed (Blackstone, 1972).

Claim-supporting evidence: The relationship between Alexandra and the earth is an intensely personal one: For the first time, perhaps, since that land emerged from the waters of geologic ages, a human face was set toward it with love and yearning... The religious connotations of the more lyrical descriptions of the land prepare us for the emergence of Alexandra as its goddess (Helmick, 1968).

Paraphrase-supporting evidence: O Pioneers! is the story of a Swedish immigrant, Alexandra Bergson, who comes to Nebraska with her parents when she is young. Her father dies, and she has to take over the farm and look after her younger brothers. Her courage, vision, and energy bring life and civilization to the wilderness. As Alexandra faces the future after her father's death, Willa Cather writes: For the first time, perhaps, since that land emerged from the waters of geologic ages, a human face was set toward it with love and yearning. The history of every country begins in the heart of a man or a woman. Alexandra succeeds in taming the wild land, and after a heaping measure of material success and personal tragedy, she faces the future calmly (Woodress, 1975).
Miscellaneous: 18 of the 200 samples were not literary analysis, though some were still related to literature (for example, analysis of the film adaptation of The Age of Innocence). Others were excerpts from the primary sources that suffered from severe OCR artifacts and were therefore not detected and extracted by the methods in Appendix A.2.

Literary Evidence Retrieval
Having established that the examples in RELiC contain complex interplay between literary quotation and scholarly analysis, we now shift to measuring how well neural models can understand these interactions. In this section, we first formalize our evidence retrieval task, which provides the scholarly context without the quotation as input to a model, along with a set of candidate passages that come from the same book, and asks the model to retrieve the ground-truth missing quotation from the candidates. Then, we describe standard information retrieval baselines as well as a RoBERTa-based ranking model that we implement to solve our task.

Task formulation
Formally, we represent a single window in RELiC from book b as (…, l_{-2}, l_{-1}, q^n, r_1, r_2, …), where q^n is the quoted n-sentence-long passage, and l_i and r_j correspond to individual sentences before and after the quotation in the scholarly article, respectively. The window size on each side is bounded by hyperparameters l_max and r_max, each of which can be up to 4 sentences. Given the l_{-l_max:-1} and r_{1:r_max} sentences surrounding the missing quotation, we ask models to identify the quoted passage q^n from the candidate set C_{b,n}, which consists of all n-sentence-long passages in book b (see Figure 1). This is a particularly challenging retrieval task because the candidates are part of the same overall narrative and thus mention the same overall set of entities (e.g., characters, locations) and other plot elements, which is a disadvantage for methods based on string overlap.
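Concretely, the candidate set C_{b,n} can be built by sliding an n-sentence window over the book's sentence-tokenized text. A minimal sketch (the function name is ours):

```python
def candidate_passages(sentences, n):
    """All contiguous n-sentence passages from a book, in reading order.
    `sentences` is the book's sentence-tokenized text."""
    return [" ".join(sentences[i:i + n])
            for i in range(len(sentences) - n + 1)]
```

For a book with S sentences this yields S - n + 1 candidates; with n = 1, War and Peace produces the ∼32K single-sentence candidates mentioned earlier.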
Evaluation: Models built for our task must produce a ranked list of candidates C_{b,n} for each example. We evaluate these rankings using both recall@k for k = 1, 3, 5, 10, 50, 100 and the mean rank of q in the ranked list. Both types of metrics focus on the position of the ground-truth quotation q in the ranked list, and neither gives special treatment to candidates that overlap with q. As such, recall@1 alone is overly strict when the quotation length n > 1, which is why we report recall at multiple values of k. An additional motivation is that there may be multiple different candidates that fit a single context equally well. We also report accuracy on a proxy task with only three candidates, which allows us to compare with human performance as described in Section 4.

Table 3: Overall comparison of different systems and context sizes (L/R indicates the number of sentences on the left and right side of the missing quote) on the test set of RELiC using recall@k metrics, normalized to a maximum score of 100. Our trained dense-RELiC retriever significantly outperforms BM25 and all pretrained dense retrieval models. The average number of candidates per example is 4888. We also report the accuracy of different systems on a proxy task that we administered to human domain experts, which shows that there is huge room for improvement.
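Both metrics are straightforward to compute from the rank of the gold quotation in each example's ranked candidate list. A sketch, with ranks 1-indexed:

```python
def recall_at_k(gold_ranks, k):
    """Percentage of examples whose gold quotation ranks in the top k."""
    return 100.0 * sum(r <= k for r in gold_ranks) / len(gold_ranks)

def mean_rank(gold_ranks):
    """Average rank of the gold quotation (lower is better)."""
    return sum(gold_ranks) / len(gold_ranks)
```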

Models
Baselines: Our baselines include both standard term-matching methods and pretrained dense retrievers. BM25 (Robertson et al., 1995) is a bag-of-words method that is very effective for information retrieval. We form queries by concatenating the left and right context and use the implementation from the rank_bm25 library (https://github.com/dorianbrown/rank_bm25) to build a BM25 model for each unique candidate set C_{b,n}, tuning the free parameters as per Kamphuis et al. (2020). Meanwhile, our dense retrieval baselines are pretrained neural encoders that map queries and candidates to vectors. We compute vector similarity scores (e.g., cosine similarity) between every query/candidate pair, which are used to rank candidates for every query and perform retrieval. We consider four pretrained dense retriever baselines in our work, which we deploy in a zero-shot manner (i.e., not fine-tuned on RELiC):

• ColBERT is a ranking model from Khattab and Zaharia (2020) that estimates the relevance between a query and a document using contextualized late interaction. It is trained on MS MARCO ranking data (Nguyen et al., 2016).

(Notes: ColBERT does not provide a ranking for candidates outside the top 1000, so we cannot report its mean rank. We do not report BM25's accuracy on the proxy task because its top-ranked quotes were used as candidates in the proxy task in addition to the ground-truth quotation.)
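BM25's reliance on term overlap can be made concrete with a self-contained Okapi BM25 ranker. The paper uses the rank_bm25 library with tuned parameters; this simplified sketch is ours, with common default values of k1 and b:

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k1=1.5, b=0.75):
    """Rank documents in `corpus` (lists of tokens) for a tokenized
    `query`, returning candidate indices, best first."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    df = Counter()                       # document frequency per term
    for doc in corpus:
        df.update(set(doc))
    def idf(t):
        return math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = sum(idf(t) * tf[t] * (k1 + 1) /
                (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
                for t in set(query))
        scores.append(s)
    return sorted(range(N), key=lambda i: -scores[i])
```

Because the score is a sum over query terms that appear in a candidate, a candidate sharing no tokens with the query scores zero, which is exactly why BM25 fails on RELiC examples with little lexical overlap.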

Training retrievers on RELiC (dense-RELiC):
Both BM25 and the pretrained dense retriever baselines perform similarly poorly on RELiC (Table 3). These methods are unable to capture the more complex interactions within RELiC that do not exhibit extensive string overlap between quotation and context. As such, we also implement a strong neural retrieval model that is actually trained on RELiC, using a setup similar to DPR and REALM. We first form a context string c by concatenating a window of sentences on either side of the quotation q (which is replaced by a MASK token). We then train two encoder networks, each initialized with a pretrained RoBERTa-base model (Liu et al., 2019), to project the context and the quotation to fixed 768-d vectors; we use the <s> token representation of RoBERTa to obtain these vectors, which we denote c_i and q_i. To train this model, we use a contrastive objective (Chen et al., 2020) that pushes the context vector c_i close to its quotation vector q_i, but away from all other quotation vectors q_j in the same minibatch ("in-batch negative sampling"):

L = − Σ_{(c_i, q_i) ∈ B} log [ exp(c_i · q_i) / Σ_{(c_j, q_j) ∈ B} exp(c_i · q_j) ]

where B is a minibatch. Note that the size of the minibatch |B| is an important hyperparameter, since it determines the number of negative samples. All elements of the minibatch are context/quotation pairs sampled from the same book. During inference, we rank all quotation candidate vectors by their dot product with the context vector.
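This in-batch objective is the standard InfoNCE loss with positives on the diagonal of the similarity matrix. A NumPy sketch over a batch of precomputed embeddings (array shapes are our assumption):

```python
import numpy as np

def in_batch_contrastive_loss(C, Q):
    """C, Q: (B, d) arrays of context and quotation embeddings, where
    (C[i], Q[i]) is a positive pair and every other Q[j] in the batch
    serves as a negative for C[i]."""
    logits = C @ Q.T                                     # (B, B) dot products
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # positives on diagonal
```

During training, gradients of this loss with respect to the two RoBERTa encoders pull each context vector toward its own quotation and push it away from the other |B| − 1 quotations in the batch.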

Results
We report results from the baselines and our dense-RELiC model in Table 3 with varying context sizes where L/R refers to L preceding context sentences and R subsequent context sentences. While all models substantially outperform random candidate selection, all pretrained neural dense retrievers perform similarly to BM25, with ColBERT being the best pretrained neural retriever (2.9 recall@1). This result indicates that matching based on string overlap or semantic similarity is not enough to solve RELiC, and even powerful neural retrievers struggle on this benchmark. Training on RELiC is crucial: our best-performing dense-RELiC model performs 7x better than BM25 (9.4 vs 1.3 recall@1).
Context size and location matter for model performance: Table 3 shows that dense-RELiC effectively utilizes longer context: feeding only one sentence on each side of the quotation (1/1) is not as effective as four sentences on each side (4/4), at 7.8 vs 9.4 recall@1. However, longer contexts hurt performance for pretrained dense retrievers in the zero-shot setting (1.6 vs 0.9 recall@1 for c-REALM), perhaps because context further away from the quotation is less likely to be helpful. Finally, we observe that dense-RELiC performs strictly better when given only preceding context (4/0 or 1/0) than when given only subsequent context (0/4 or 0/1), at 6.8 vs 5.2 recall@1.
(Training details: we set |B| = 100 and train all models for 10 epochs on a single RTX8000 GPU with an initial learning rate of 1e-5 using the Adam optimizer (Kingma and Ba, 2015), with early stopping on validation loss. Models typically took 4 hours to complete 10 epochs. Our implementation uses the HuggingFace transformers library (Wolf et al., 2020). The total number of model parameters is 249M.)

Dense vs. sparse retrievers: As expected, BM25 retrieves the correct quotation when there is significant string overlap between the quotation and context, as in the following example from The Great Gatsby, in which the terms sky, bloom, Mrs. McKee, voice, call, and back appear in both places:
Yet his analogy also implicitly unites the two women. Myrtle's expansion and revolution in the smoky air are also outgrowths of her surreal attributes, stemming from her residency in the Valley of Ashes. The late afternoon sky bloomed in the window for a moment like the blue honey of the Mediterranean-then the shrill voice of Mrs. McKee called me back into the room. The objective talk of Monte Carlo and Marseille has made Nick daydream. In Chapter I Daisy and the rooms had bloomed for him, with him, and now the sky blooms. The fact that Mrs. McKee's voice "calls him back" clearly reveals the subjective daydreamy nature of this statement.
However, this behavior is undesirable for most examples in RELiC, since string overlap is generally not predictive of the relationship between quotations and claims. The top row of Table 5 contains one such example, where dense-RELiC correctly chooses the missing quotation while BM25 is misled by string overlap.

Human performance and analysis
How well do humans actually perform on RELiC? To compare the performance of our dense retriever to that of humans, we hired six domain experts, each with at least an undergraduate degree in English literature, from the Upwork freelancing platform (https://upwork.com). Because providing thousands of candidates to a human evaluator is infeasible, we instead measure human performance on a simplified proxy task: we provide our evaluators with four sentences on either side of a missing quotation from Pride and Prejudice and ask them to select one of only three candidates to fill in the blank. We obtain human judgments both to measure a human upper bound on this proxy task and to evaluate whether humans struggle with examples that fool our model.
Human upper bound: First, to measure a human upper bound on this proxy task, we chose 200 test set examples from Pride and Prejudice and formed a candidate pool for each by including BM25's top two ranked answers along with the ground-truth quotation for the single-sentence case. (We restricted our proxy task to the most well-known book in our test set because of the ease with which we could find highly-qualified workers who self-reported that they had read, and often re-read, Pride and Prejudice.) As the task is trivial to solve with random candidates, we decided to use a model to select harder negatives, and we chose BM25 to see if humans would be distracted by high string overlap in the negatives. Each of the 200 examples was separately annotated by three experts, who were paid $100 for annotating 100 examples. The last column of Table 3 compares all of our baselines along with dense-RELiC against human domain experts on this proxy task. Humans substantially outperform all models: at least two of the three domain experts selected the correct quote 93.5% of the time, while the highest score for dense-RELiC is 67.5%, which indicates huge room for improvement. Interestingly, all of the zero-shot dense retrievers except ColBERT 1/1 underperform random selection on this task; we theorize that this is because these retrievers are misled by the high string overlap of the BM25-selected negative examples.

Table 4: Inter-annotator agreement of our three human annotators compared to a random annotation. In our 3-way classification task, all three annotators chose the same option 68.5% of the time, while they each chose a different option in just 0.5% of instances. Our annotators also show substantial agreement in terms of Fleiss Kappa (Fleiss, 1971).
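Fleiss' kappa over a 3-way annotation task like ours can be computed from per-item category counts; a standard implementation of Fleiss (1971):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa. `counts` is a list of rows, one per item; each row
    holds how many raters assigned the item to each category, and all
    rows sum to the same number of raters n."""
    N = len(counts)                      # number of items
    n = sum(counts[0])                   # raters per item
    k = len(counts[0])                   # number of categories
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N                 # mean observed agreement
    P_e = sum(p * p for p in p_j)        # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```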
(In our proxy task each instance has a different set of candidate quotations, which we randomly shuffle before showing annotators. Since our task is not strictly categorical, when computing Fleiss Kappa we define "category" as the option number shown to annotators; we believe this definition is closest to the free-marginal nature of our task (Randolph, 2010).)

Human error analysis of dense-RELiC: To evaluate the shortcomings of our dense-RELiC retriever, we also administered a version of the proxy task in which the candidate pool included the ground-truth quotation along with dense-RELiC's two top-ranked candidates, restricted to examples for which the model ranked the ground truth outside the top 1000 candidates. Three domain experts attempted 100 of these examples and achieved an accuracy of 94%, demonstrating that humans can easily disambiguate cases on which our model fails, though we note our model's poorer performance when retrieving a single sentence (as in the proxy task) versus multiple sentences (Table A5). The bottom two rows of Table 5 contain instances in which all human annotators agreed on the correct candidate but dense-RELiC failed to rank it in the top 1000. In one, all human annotators immediately recognized the opening line of Pride and Prejudice, one of the most famous in English literature. In the other, the claim mentions that the interpretation hinges on a single word's ("got") connotation of "a market," which humans understood.

Table 5: Examples that show failure cases of BM25 (top row) and our dense-RELiC retriever (bottom two rows) from our proxy task on Pride and Prejudice. BM25 is easily misled by string overlap, while dense-RELiC lacks the world knowledge (e.g., knowing the famous first sentence) and complex linguistic understanding (e.g., the relationship between marriage as a market and "got") that humans can easily rely on to disambiguate the correct quotation.

(…, 1967)
[Human]: It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.
[dense-RELiC]: "My dear Mr. Bennet," said his lady to him one day, "have you heard that Netherfield Park is let at last?"
Human readers can immediately identify the first sentence of Pride and Prejudice, while dense-RELiC lacks this world knowledge.

Sometimes we hear Mrs. Bennet's idea of marriage as a market in a single word: [masked quote] Her stupidity about other people shows in all her dealings with her family... (McEwan, 1986)
[Human]: "I do not blame Jane," she continued, "for Jane would have got Mr. Bingley if she could."
[dense-RELiC]: You must and shall be married by a special licence.
Human readers understood the uncommon usage of "got" to convey a transaction.
Issuing out-of-distribution queries to the retriever: Does our dense-RELiC model have potential to support humanities scholars in their evidence-gathering process? Inspired by prompt-based learning, we manually craft simple yet out-of-distribution prompts and query our dense-RELiC retriever trained with 1 sentence of left context and no right context. A qualitative inspection of the top-ranked quotations in response to these prompts (Table 6) reveals that the retriever is able to obtain evidence for distinct character traits, such as the ignorance of the titular character in Frankenstein or Gatsby's wealthy lifestyle in The Great Gatsby. More impressively, when queried for an example from Pride and Prejudice of the main character, Elizabeth, demonstrating frustration towards her mother, the retriever returns relevant excerpts in the first person that do not mention Elizabeth, and the top-ranked quotations have little to no string overlap with the prompts.
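Prompt-based querying uses the same inference step as standard retrieval: embed the prompt with the context encoder, then rank the precomputed quotation embeddings by dot product. A sketch with assumed array shapes:

```python
import numpy as np

def retrieve_top_k(context_vec, quote_vecs, k=3):
    """Return indices of the top-k quotations by dot-product score.
    `context_vec`: (d,) prompt/context embedding; `quote_vecs`: (n, d)."""
    scores = quote_vecs @ context_vec
    return np.argsort(-scores)[:k].tolist()
```

Because the quotation embeddings are fixed once computed, a whole book can be indexed ahead of time and queried interactively with arbitrary prompts.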
Limitations: While these results show dense-RELiC's potential to assist research in the humanities, the model suffers from the limited expressivity of its candidate quotation embeddings q_i, and addressing this problem is an important direction for future work. The quotation embeddings do not incorporate any broader context from the narrative, which prevents resolving coreferences to pronominal character mentions and understanding other important discourse phenomena. For example, Table A5 shows that dense-RELiC's top two 1-sentence candidates for the above Pride and Prejudice example are not appropriate evidence for the literary claim; the increased relevance of the 2-sentence candidates (Table 6, third row) over the 1-sentence candidates suggests that dense-RELiC may benefit from more contextualized quotation embeddings. Furthermore, dense-RELiC struggles with retrieving concepts unique to a text, such as the "hypnopaedic phrases" strewn throughout Brave New World (Table 6, bottom).

Related Work
Datasets for literary analysis: Our work relates to previous efforts to apply NLP to literary datasets such as LitBank (Bamman et al., 2019; Sims et al., 2019), an annotated dataset of 100 works of fiction with annotations of entities, events, coreferences, and quotations. Papay and Padó (2020) introduced RiQuA, an annotated dataset of quotations in English literary text for studying dialogue structure, while Chaturvedi et al. (2016) and Iyyer et al. (2016) characterize character relationships in novels. Our work also relates to quotability identification (MacLaughlin and Smith, 2021), which focuses on ranking passages in a literary work by how often they are quoted in a larger collection. Unlike RELiC, however, these datasets do not contain literary analysis about the works.

From Frankenstein, given "Victor does not consider the consequences of his actions:" our model's top-ranked single-sentence candidates are: 1. It is even possible that the train of my ideas would never have received the fatal impulse that led to my ruin. 2. The threat I had heard weighed on my thoughts, but I did not reflect that a voluntary act of mine could avert it. 3. Now my desires were complied with, and it would, indeed, have been folly to repent.
From The Great Gatsby, given "A symbol of Gatsby's lifestyle:" our model's top-ranked single-sentence candidates are: 1. His movements-he was on foot all the time-were afterward traced to Port Roosevelt and then to Gad's Hill where he bought a sandwich that he didn't eat and a cup of coffee. 2. Every Friday five crates of oranges and lemons arrived from a fruiterer in New York-every Monday these same oranges and lemons left his back door in a pyramid of pulpless halves. 3. On week-ends his Rolls-Royce became an omnibus, bearing parties to and from the city, between nine in the morning and long past midnight, while his station wagon scampered like a brisk yellow bug to meet all trains.
From Pride and Prejudice, given "Elizabeth displays frustration towards her mother:" our model's top-ranked 2-sentence candidates are: 1. Oh, that my dear mother had more command over herself! She can have no idea of the pain she gives me by her continual reflections on him. 2. My mother means well; but she does not know, no one can know, how much I suffer from what she says. 3. with tears and lamentations of regret, invectives against the villainous conduct of Wickham, and complaints of her own sufferings and ill-usage; blaming everybody but the person to whose ill-judging indulgence the errors of her daughter must principally be owing.
From Brave New World, given "Children are indoctrinated while sleeping and taught hypnopaedic phrases, such as", our model's top-ranked single-sentence candidates are: 1. The principle of sleep-teaching, or hypnopaedia, had been discovered. 2. Roses and electric shocks, the khaki of Deltas and a whiff of asafoetida-wedded indissolubly before the child can speak. 3. Told them of the growing embryo on its bed of peritoneum.
Table 6: Given a novel and a short out-of-distribution prompt, this table shows the top 3 quotations from the novel that dense-RELiC returns as evidence. The relevance of many of the returned quotations, even without string overlap between the prompt and candidates, indicates that the model is learning some non-trivial relationships that could have potential impact for building tools that support humanities research. However, it is not perfect, as shown in the final example, where none of the retrieved quotations is actually an instance of a hypnopaedic phrase.
Retrieving cited material: Citation retrieval closely relates to RELiC and has a long history of research, mostly on scientific papers: O'Connor (1982) formulated the task of document retrieval using "citing statements", which Liu et al. (2014) revisit to create a reference retrieval tool that recommends references given context. Bertin et al. (2016) examine the rhetorical structure of citation contexts. Perhaps closest to RELiC is the work of Grav (2019), which concentrates on the quotation of secondary sources in other secondary sources, unlike our focus on quotation from primary sources. Finally, as described in more detail in Section 2.2 and Appendix A6, RELiC differs significantly from existing NLP and IR retrieval datasets in domain, linguistic complexity, and query length.

Conclusion
In this work, we introduce the task of literary evidence retrieval and an accompanying dataset, RELiC. We find that direct quotation of primary sources in literary analysis is most commonly used as evidence for literary claims or arguments. We train a dense retriever model for our task; while it significantly outperforms baselines, human performance indicates substantial room for improvement. Important future directions include (1) building better models of primary sources that integrate narrative and discourse structure into the candidate representations instead of computing them out-of-context, and (2) integrating RELiC models into real tools that can benefit humanities researchers.

Ethical Considerations
We acknowledge that the group of authors from whom we selected primary sources lacks diversity because we selected from among digitized, public domain sources in the Western literary canon, which is heavily biased towards white, male writers. We made this choice because there are relatively few primary sources in the public domain that are written by minority authors and also have substantial amounts of literary analysis written about them. We hope that our data collection approach will be followed by those with access to copyrighted texts in an effort to collect a more diverse dataset. The experiments involving humans were reviewed by the UMass Amherst IRB with a status of Exempt.

Appendices for "RELiC: Retrieving Evidence from Literature in Context"

A Dataset Collection & Statistics
Filtering secondary sources: The HathiTrust is not exclusively a repository of literary analysis, and we observe that many matching quotes come from different editions of a primary source, writing manuals, and even advertisements. Because we seek only scholarly work that directly analyzes the quoted sentences, we performed a combination of manual and automatic filtering to remove such extraneous matches. For each primary source, we first aggregate all secondary source matches by their unique HathiTrust-assigned identifier. Manual inspection of the secondary source titles shows that sources that quote a particular literary work only once or twice are usually not literary scholarship, while sources with hundreds of matches are almost always a different edition of the primary source itself. For each primary source, we therefore set upper and lower thresholds on the number of matches and discard sources that fall outside these bounds. Additionally, we discard secondary sources whose titles contain words such as "dictionary", "anthology", and "encyclopedia" that indicate a secondary source is not literary scholarship.
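The two-stage filter described above (match-count thresholds plus a title keyword blocklist) can be sketched as follows; the threshold values and keyword set are hypothetical placeholders, not the values used to build RELiC.

```python
from collections import Counter

# Hypothetical keyword blocklist; the paper lists "dictionary", "anthology",
# "encyclopedia", and others.
NON_SCHOLARSHIP_WORDS = {"dictionary", "anthology", "encyclopedia"}

def filter_secondary_sources(matches, lower=3, upper=200):
    """Keep only sources that look like literary scholarship.

    matches: list of (hathitrust_id, title) pairs, one per quote match.
    lower/upper: per-source match-count bounds (illustrative values).
    """
    counts = Counter(src_id for src_id, _ in matches)
    titles = dict(matches)
    kept = set()
    for src_id, n in counts.items():
        # Too few matches: likely not scholarship. Too many: likely
        # another edition of the primary source itself.
        if not (lower <= n <= upper):
            continue
        title = titles[src_id].lower()
        if any(word in title for word in NON_SCHOLARSHIP_WORDS):
            continue
        kept.add(src_id)
    return kept
```

For example, a source quoted five times with a scholarly-looking title survives both checks, while a one-off match from "A Dictionary of Literature" or a source with hundreds of matches is discarded.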
Preprocessing: After the above filtering, we identified and removed all non-English secondary sources using langid (https://github.com/saffsd/langid.py), a Python tool for language identification. Next, because the secondary source texts in the HathiTrust are digitized via OCR, various artifacts appear throughout the pages we download. Some of these, such as citations that include the page number of primary source quotes, allow models trained on our task to "cheat" to identify the proper quote (see Table A1), necessitating their removal. Using a pattern-matching approach, we eliminate the most pervasive: in-line citations, headers, footers, and word breaks. Finally, we apply sentence tokenization to standardize the length of the preceding and subsequent context windows for the final dataset. Specifically, we feed the preprocessed text through spaCy's (https://spacy.io/) dependency parser-based sentence segmenter. We modify the default segmenter to treat ellipses, colons, and semicolons as additional sentence boundaries, based on the observation that literary scholars often quote only part of what would typically be defined as a sentence (Table A2).
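As a simplified, regex-based stand-in for the modified segmenter (the real pipeline uses spaCy's dependency-parse-based segmentation; this sketch only illustrates the effect of the extra boundary characters), splitting on colons and semicolons in addition to sentence-final punctuation looks like:

```python
import re

# Split on whitespace preceded by sentence-final punctuation or by the
# custom boundaries (colons, semicolons) added because scholars often
# quote only part of a full sentence. Ellipses end in '.', so they are
# covered by the same character class. Not the actual spaCy pipeline.
BOUNDARY = re.compile(r"(?<=[.!?;:])\s+")

def segment(text: str):
    return [s.strip() for s in BOUNDARY.split(text) if s.strip()]
```

With this splitting, a clause ending in a semicolon or colon becomes its own segment, which matches how partial quotes like the Awakening example in Table A2 are aligned.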

Raw text from HathiTrust:
The prejudice in these same eyes, however, keeps them "less clear-sighted" (p. 149) to Bingley's feelings for Jane and totally closed to the real worth-lessness of Wickham and worth of Darcy. When Jane's letter reporting 196 Mark M. Hennelly, Jr. Lydia's disappearance with Wickham confirms Darcy's earlier indictment of him, though, Elizabeth's "eyes were opened to his real character" (p. 277).

Quoted span in context of literary analysis:
Edna tries to discuss this issue of possession versus self-possession with Madame Ratignolle but to no avail; 'the two women did not appear to understand each other or to be talking the same language.' Madame Ratignolle cannot comprehend that there might be something more that a mother could sacrifice for her children beyond her life...

Quote in original context from The Awakening:
Edna had once told Madame Ratignolle that she would never sacrifice herself for her children, or for any one. Then had followed a rather heated argument; the two women did not appear to understand each other or to be talking the same language. Edna tried to appease her friend, to explain.
(2000) that quotes part of a sentence (following a semi-colon) from the primary source. We detect such partial matches during preprocessing.
Identifying quoted sentences: As previously mentioned, HathiTrust does not provide the exact indices corresponding to the primary source quote. As such, we identify which secondary source sentences (from the output of the sentence tokenizer) include quotes from primary source works using RapidFuzz, a fuzzy string matching library, with the QRatio metric and a score threshold of 80.0. Fuzzy matching is essential for detecting quotes with OCR mistakes or with author modifications; in Appendix Table A3, for instance, the author adds the clarification "[the natives]" and omits "he would say" when citing two sentences from Joseph Conrad's Heart of Darkness. Once a fuzzy match is identified in a secondary source document, we replace it with its corresponding primary source sentence.
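The thresholded matching described above can be approximated with the standard library; this sketch uses difflib's similarity ratio as a stand-in for RapidFuzz's QRatio (the two metrics differ in preprocessing and detail, and the helper names here are illustrative):

```python
from difflib import SequenceMatcher

def fuzzy_score(a: str, b: str) -> float:
    """Similarity on a 0-100 scale, a rough stand-in for QRatio."""
    return 100.0 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_quoted_sentence(secondary_sentence, primary_sentences, threshold=80.0):
    """Return the best-matching primary-source sentence above the
    threshold, or None if no candidate scores high enough."""
    best, best_score = None, threshold
    for cand in primary_sentences:
        score = fuzzy_score(secondary_sentence, cand)
        if score >= best_score:
            best, best_score = cand, score
    return best
```

A sentence with a small author modification (an inserted bracketed clarification, say) still scores well above 80 against its source sentence, while unrelated text falls far below the threshold, which is exactly why a fuzzy rather than exact match is needed.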

Secondary source material:
Kurtz's credo, like his royal employer's, was a simple one. 1. "You show them [the natives] you have in you something that is really profitable, and then there will be no limits to the recognition of your ability. 2. Of course you must take care of the motives-right motives-always." Kurtz dies screaming: "The Horror! The Horror!" Leopold, so far as one knows, died more peacefully (Legum, 1972).
Window in RELiC with standardized quote:
Kurtz's credo, like his royal employer's, was a simple one. 'You show them you have in you something that is really profitable, and then there will be no limits to the recognition of your ability,' he would say. 'Of course you must take care of the motives-right motives-always.' Kurtz dies screaming: "The Horror! The Horror!" Leopold, so far as one knows, died more peacefully.

Handling ellipses: One prevalent technique for direct quotation in literary analysis is the use of ellipses to condense primary source material. Because our fuzzy match method falls short in detecting block quotes that contain ellipses, we implement an additional method for ensuring that block quotes are properly delineated. Once the fuzzy match approach fails to identify any more consecutively quoted sentences in a secondary source, we continue to search for matches adjacent to the block quote using the Longest Common Substring (LCS) metric. If a block-quote-adjacent sentence in the secondary source shares an LCS of 15 or more characters with the block-quote-adjacent sentence in the primary source, the sentence is considered a match and is concatenated with the block quote (see Appendix A.1 for an example).
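The LCS check described above can be sketched with difflib's longest-match search; the function names are illustrative, and the 15-character threshold is the one stated in the text:

```python
from difflib import SequenceMatcher

def lcs_length(a: str, b: str) -> int:
    """Length of the longest common substring of a and b."""
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return m.size

def is_block_quote_continuation(secondary_sent, primary_sent, min_len=15):
    """A block-quote-adjacent secondary sentence is joined to the block
    quote if it shares an LCS of at least min_len characters with the
    adjacent primary-source sentence."""
    return lcs_length(secondary_sent, primary_sent) >= min_len
```

A sentence whose opening words are replaced by an ellipsis still shares a long literal substring with its source sentence, so the LCS test recovers matches that whole-sentence fuzzy scoring misses.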

A.1 LCS example
For example, in Parker (1985), Kenneth Parker cites a passage from Joseph Conrad's Heart of Darkness: "The narrator, Marlow, informs us, approvingly:...I met a white man, in such an unexpected elegance of get-up that in the first moment I took him for a sort of vision. I saw a high starched collar, white cuffs, a light alpaca jacket, snowy trousers, a clean necktie, and varnished boots." Fuzzy matching alone is insufficient for detecting the first sentence in this block quote, which contains an ellipsis in place of primary source text.

With our LCS approach, we are able to replace the first sentence of the block quote above with "When near the buildings I met a white man, in such an unexpected elegance of get-up that in the first moment I took him for a sort of vision."

A.2 Noise when standardizing quotes
In a small number of cases, our quote standardization process removes important context. For example, the analysis of Maes-Jelinek (1970) quotes a sentence from D.H. Lawrence's The Rainbow as "As to Will, his intimate life was so violently active, that it set another man free in him." After standardization, the example in our dataset becomes "His intimate life was so violently active, that it set another man free in him.", dropping the critical "As to Will" necessary for integrating the quote into the surrounding analysis.

Window of secondary source analysis:
For example, Elizabeth's anger with herself, after reading Darcy's letter, is couched largely in the vocabulary of rectifiable intellectual error - "blind, partial, prejudiced, absurd," and the like - rather than in the relentless, coercive vocabulary of moral contrition. Her discomfiture, though profound, has a Greek ring to it: Till this moment I never knew myself. Heuristically, the distinction between moral and other spheres of value throws light also on other Austen novels that we can only glance at here (Wilkie, 1992).

Best model's top-ranked candidate: that loss of virtue in a female is irretrievable;
Best model's second-ranked candidate: but when she considered how unjustly she had condemned and upbraided him, her anger was turned against herself;
Model-predicted quotes are sometimes as valid as the gold quote: Human raters also identified cases in which multiple quotes appear to be appropriate evidence for a literary claim, which illustrates the model's potential to help humanities scholars find evidence. In Table A4, both the model and the experts failed to identify the correct quote, "Till this moment I never knew myself.", which both depicts Elizabeth's "discomfiture" and has a "Greek ring to it." However, the experts all selected the model's second-ranked choice, which mentions Elizabeth's "anger" at "herself." This quote also shows Elizabeth's displeasure while referring to the Greek idea of self.

A.3 More dataset statistics
Each primary source has relevant windows from an average of 112 unique secondary sources, and an average of 16.35% of the sentences in each primary source are quoted in secondary sources. On average, each primary source has 995 corresponding windows in our dataset, and each secondary source produced an average of 9 windows. Figure 2 shows the distribution of quote lengths in RELiC, suggesting that successful models will have to learn to understand both single-sentence and block quotes in context.

B Best Model Detailed Results
Candidate length does not significantly affect model performance: We observe in Table A9 that the length of the ground-truth quote and the candidates does not significantly impact model performance: for a fixed k, model performance is within 10% for any candidate length. Performance is slightly worse for longer candidates of length 4 or 5 and for the shortest single-sentence contexts (possibly due to under-specification).

Table A5: When querying the model with out-of-distribution prompts, the number of sentences in the desired candidates can be specified. This table shows the top 3 single-sentence quotations from Pride and Prejudice that dense-RELiC returns as evidence. The greater suitability of the 2-sentence candidates (shown in Table 6) over the single-sentence candidates suggests that contextualizing the quotation embeddings would improve model performance.

Table A6: A comparison between datasets in the BEIR benchmark and our RELiC dataset. Ours is the first retrieval dataset in the literary domain, formulating the new task of literary evidence retrieval.