Solving Hard Coreference Problems

Coreference resolution is a key problem in natural language understanding that still escapes reliable solutions. One fundamental difficulty has been that of resolving instances involving pronouns since they often require deep language understanding and use of background knowledge. In this paper, we propose an algorithmic solution that involves a new representation for the knowledge required to address hard coreference problems, along with a constrained optimization framework that uses this knowledge in coreference decision making. Our representation, Predicate Schemas, is instantiated with knowledge acquired in an unsupervised way, and is compiled automatically into constraints that impact the coreference decision. We present a general coreference resolution system that significantly improves state-of-the-art performance on hard, Winograd-style, pronoun resolution cases, while still performing at the state-of-the-art level on standard coreference resolution datasets.


Introduction
Coreference resolution is one of the most important tasks in Natural Language Processing (NLP). Although there is a plethora of work on this task (Soon et al., 2001a; Ng and Cardie, 2002a; Ng, 2004; Bengtson and Roth, 2008; Pradhan et al., 2012; Kummerfeld and Klein, 2013; Chang et al., 2013), it is still deemed an unsolved problem due to the intricate and ambiguous nature of natural language text. Existing methods perform particularly poorly on pronouns, specifically when gender or plurality information cannot help. In this paper, we aim to improve coreference resolution by addressing these hard problems. Consider the following examples:

Ex.1 [A bird]_e1 perched on the [limb]_e2 and [it]_pro bent.
Ex.2 [Robert]_e1 was robbed by [Kevin]_e2, and [he]_pro is arrested by police.

* These authors contributed equally to this work.
In both examples, one cannot resolve the pronouns based only on gender or plurality information. Recently, Rahman and Ng (2012) gathered a dataset containing 1886 sentences of such challenging pronoun resolution problems (referred to later as the Winograd dataset, following Winograd (1972) and Levesque et al. (2011)). As an indication of the difficulty of these instances, we note that a state-of-the-art coreference resolution system (Chang et al., 2013) achieves precision of 53.26% on it. A special-purpose classifier (Rahman and Ng, 2012) trained on this dataset achieves 73.05%. The key contribution of this paper is a general-purpose, state-of-the-art coreference approach which, at the same time, achieves precision of 76.76% on these hard cases.
Addressing these hard coreference problems requires significant amounts of background knowledge, along with an inference paradigm that can make use of it in supporting the coreference decision. Specifically, in Ex.1 one needs to know that "a limb bends" is more likely than "a bird bends". In Ex.2 one needs to know that the subject of the verb "rob" is more likely to be the object of "arrest" than the object of the verb "rob" is. The knowledge required is, naturally, centered around the key predicates in the sentence, motivating the central notion proposed in this paper, that of Predicate Schemas. In this paper, we develop the notion of Predicate Schemas, instantiate them with automatically acquired knowledge, and show how to compile this knowledge into constraints that are used to resolve coreference within a general Integer Linear Programming (ILP) driven approach to coreference resolution. Specifically, we study two types of Predicate Schemas that, as we show, cover a large fraction of the challenging cases. The first specifies one predicate with its subject and object, thus providing information on the subject and object preferences of a given predicate. The second specifies two predicates with a semantically shared argument (either subject or object), thus specifying role preferences of one predicate relative to the roles of the other. We instantiate these schemas by acquiring statistics in an unsupervised way from multiple resources, including the Gigaword corpus, Wikipedia, web queries and polarity information.
A lot of recent work has attempted to utilize similar types of resources to improve coreference resolution (Rahman and Ng, 2011a; Ratinov and Roth, 2012; Bansal and Klein, 2012; Rahman and Ng, 2012). The common approach has been to inject knowledge as features. However, these pieces of knowledge provide relatively strong evidence that loses impact in standard training due to sparsity. Instead, we compile our Predicate Schemas knowledge automatically, at inference time, into constraints, and make use of an ILP-driven framework (Roth and Yih, 2004) to make decisions. Using constraints is also beneficial when the interaction between multiple pronouns is taken into account in making global decisions. Consider the following example:

Ex.3 [Jack]_e1 threw the bags of [John]_e2 into the water since [he]_pro1 mistakenly asked [him]_pro2 to carry [his]_pro3 bags.
In order to correctly resolve the pronouns in Ex.3, one needs to know that "he asks him" indicates that he and him refer to different entities (because they are subject and object of the same predicate; otherwise, himself would be used instead of him). This knowledge then impacts other pronoun decisions in a global decision with respect to all pronouns: pro3 is likely to be different from pro2, and is likely to refer to e2. This type of inference can easily be represented as a constraint during inference, but is hard to inject as a feature.
We then incorporate all constraints into a general coreference system (Chang et al., 2013) utilizing the mention-pair model (Ng and Cardie, 2002b; Bengtson and Roth, 2008; Stoyanov et al., 2010). A classifier learns a pairwise metric between mentions, and during inference, we follow the framework proposed in Chang et al. (2011) using ILP.
The main contributions of this paper can be summarized as follows:
1. We propose the Predicate Schemas representation and study two specific schemas that are important for coreference.
2. We show how, in a given context, Predicate Schemas can be automatically compiled into constraints that affect inference.
3. Consequently, we address hard pronoun resolution problems as a standard coreference problem and develop a system which shows significant improvement on hard coreference problems while achieving the same state-of-the-art level of performance on standard coreference problems.
The rest of the paper is organized as follows. We describe our Predicate Schemas in Section 2 and explain the inference framework and automatic constraint generation in Section 3. A summary of our knowledge acquisition steps is given in Section 4. We report our experimental results and analysis in Section 5, and review related work in Section 6.

Predicate Schemas
In this section we present multiple kinds of knowledge that are needed in order to solve hard coreference problems. Table 1 provides two example sentences for each type of knowledge. We use m to refer to a mention. A mention can be either an entity e or a pronoun pro. pred_m denotes the predicate of m (similarly, pred_pro and pred_e for pronouns and entities, respectively). For instance, in sentence 1.1 in Table 1, the predicate of e_1 and e_2 is pred_e1 = pred_e2 = "perch on". cn refers to the discourse connective (cn = "and" in sentence 1.1). a denotes an argument of pred_m other than m. For example, in sentence 1.1, assuming that m = e_1, the corresponding argument is a = e_2. We represent the knowledge needed with two types of Predicate Schemas (as depicted in Table 2). To solve the assignment of [it]_pro in sentence 1.1, as mentioned in Section 1, we need the knowledge that "a limb bends" is more reasonable than "a bird bends". Note that the predicate of the pronoun plays a key role here. The entity mention itself is also essential. Similarly, for sentence 1.2, to resolve [it]_pro, we need the knowledge that "bee had pollen" is more reasonable than "flower had pollen". Here, in addition to the entity mention and the predicate (of the pronoun), we need the argument which shares the predicate with the pronoun. To formally define the type of knowledge needed, we denote it "pred_m(m, a)", where m and a are a mention and an argument, respectively. We use S(.) to denote a score representing how likely the predicate-mention-argument combination is. For each schema, we use several variations, obtained by either changing the order of the arguments (subj. vs. obj.) or dropping either of them. We score the various Type 1 and Type 2 schemas (shown in Table 3) differently. The first row of Table 2 shows how the Type 1 schema is used in the case of sentence 1.2.
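The argument-order and argument-dropping variations of Table 3 can be sketched with a small triple structure. The class and function names below are our own illustration, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Triple:
    """A triple pred_m(m, a): a predicate, a mention, and the other
    argument sharing that predicate (None when the slot is dropped)."""
    pred: str
    mention: Optional[str]
    arg: Optional[str]


def type1_variants(pred: str, mention: str, arg: str):
    """Enumerate Type 1 schema variations: the full triple, the
    swapped subject/object order, and the versions with one slot
    dropped (the '*' entries of Table 3)."""
    return [
        Triple(pred, mention, arg),   # pred(m, a)
        Triple(pred, arg, mention),   # pred(a, m): swapped roles
        Triple(pred, mention, None),  # pred(m, *): argument dropped
        Triple(pred, None, arg),      # pred(*, a): mention dropped
    ]
```

Each variant is scored separately, so a knowledge source that has never seen the full triple can still back off to the sparser forms.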
For sentence 2.2, we need the knowledge that the subject of the verb phrase "be afraid of" is more likely than its object to be the subject of the verb phrase "get scared". The structure here is more complicated than that of the Type 1 schema. To make it clearer, we analyze sentence 2.1. In this sentence, the object of "be robbed by" is more likely than the subject of "be robbed by" to be the object of "the officer arrest". We can see in both examples (and for the Type 2 schema in general) that both predicates (the entity predicate and the pronoun predicate) play a crucial role. Consequently, we design the Type 2 schema to capture the interaction between the entity predicate and the pronoun predicate. In addition to the predicates, we may need mention-argument information. We also stress the importance of the discourse connective between the entity mention and the pronoun; if in either sentence 2.1 or 2.2 we change the discourse connective to "although", the coreference resolution completely changes. Overall, we can represent the knowledge as "pred_m(m, a) | pred_m′(m′, a′), cn". Just like for the Type 1 schema, we can represent the Type 2 schema with a score function for different variations of arguments (lower half of Table 3). In Table 2, we exhibit this for sentence 2.2.
Type 3 contains the set of instances which cannot be solved using schemas of Type 1 or 2. Two such examples are included in Table 1. In sentences 3.1 and 3.2, the context containing the necessary information goes beyond our triple representation, and therefore these instances cannot be resolved with either of the two schema types. It is important to note that the notion of Predicate Schemas is more general than the Type 1 and Type 2 schemas introduced here. Designing more informative and structured schemas will be essential to resolving additional types of hard coreference instances.

Constrained ILP Inference
Integer Linear Programming (ILP) based formulations of NLP problems (Roth and Yih, 2004) have been used in a broad range of NLP problems and, particularly, in coreference (Chang et al., 2011; Denis and Baldridge, 2007). Our formulation is inspired by Chang et al. (2013). Let M be the set of all mentions in a given text snippet, and P the set of all pronouns, such that P ⊂ M. We train a coreference model by learning a pairwise mention scoring function. Specifically, given a mention-pair (u, v) ∈ M (u is the antecedent of v), we learn a left-linking scoring function f_{u,v} = w^T φ(u, v), where φ(u, v) is a pairwise feature vector and w is the weight vector. We then follow the Best-Link approach (Section 2.3 of Chang et al. (2011)) for inference. The ILP problem that we solve is formally defined as follows:

    max  Σ_{u<v} f_{u,v} · y_{u,v}
    s.t. y_{u,v} ∈ {0, 1}                      for all u, v
         Σ_{u: u<v} y_{u,v} ≤ 1                for each mention v
         constraints from Predicate Schemas knowledge
         constraints between pronouns
Here, u, v are mentions and y_{u,v} is the decision variable indicating whether or not mentions u and v are coreferents. As the first constraint shows, y_{u,v} is a binary variable: it equals 1 if u and v are coreferents and 0 otherwise. The second constraint indicates that we choose at most one antecedent to be coreferent with each mention v (u < v denotes that u appears before v, so u is an antecedent of v). In this work, we add constraints from Predicate Schemas knowledge and constraints between pronouns.
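Without the schema and pronoun constraints, the ILP above decomposes per mention: each v simply links to the antecedent u < v with the highest positive score f_{u,v}, or to no antecedent. A minimal sketch of this unconstrained Best-Link decoding (our own simplification; the full system solves the joint ILP with all constraints):

```python
def best_link(scores):
    """Unconstrained Best-Link decoding.

    scores: dict mapping (u, v) with u < v to the pairwise score f_{u,v}.
    Returns a dict mapping each mention v to its chosen antecedent u;
    v gets no entry when every candidate score is non-positive.
    """
    links = {}
    mentions = sorted({v for _, v in scores})
    for v in mentions:
        candidates = [(s, u) for (u, vv), s in scores.items() if vv == v]
        if not candidates:
            continue
        best_score, best_u = max(candidates)
        if best_score > 0:   # only link when the best score is positive
            links[v] = best_u
    return links
```

The knowledge-based constraints break this independence: forcing y_{u,v} ≥ y_{u′,v}, or forbidding two pronouns from sharing an antecedent, couples the per-mention decisions and requires the joint ILP.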
The Predicate Schemas knowledge provides a vector of score values S(u, v) for mention pairs {(u, v) | u ∈ M, v ∈ P}, which concatenates all the schemas involving u and v. Entries in the score vector are designed so that the larger the value, the more likely u and v are to be coreferents. We have two ways to use the score values: 1) augmenting the feature vector φ(u, v) with these scores; 2) casting the scores as constraints for the coreference resolution ILP in one of the following forms:

    s_i(u, v) > α_i · s_i(u′, v)   (multiplicative)
    s_i(u, v) > s_i(u′, v) + β_i   (additive)          (1)

where s_i(.) is the i-th dimension of the score vector S(.), corresponding to the i-th schema represented for a given mention pair, and u, u′ are competing antecedents of v. α_i and β_i are threshold values which we tune on a development set. If an inequality holds for all relevant schemas (that is, all dimensions of the score vector), we add the inequality y_{u,v} ≥ y_{u′,v} between the corresponding indicator variables inside the ILP. As we increase the value of a threshold, the constraints in (1) become more conservative, leading to fewer but more reliable constraints being added to the ILP. We tune the threshold values such that their corresponding scores attain high enough accuracy, in either the multiplicative or the additive form. Note that, given a pair of mentions and their context, we automatically instantiate a collection of relevant schemas, and then generate and evaluate a set of corresponding constraints. To the best of our knowledge, this is the first work to use such an automatic constraint generation and tuning method for coreference resolution with ILP inference. In Section 4, we describe how we acquire the score vectors S(u, v) for the Predicate Schemas in an unsupervised fashion.
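One way the tuned thresholds can turn scores into ILP constraints is sketched below. The exact comparison form is not fully specified in the text, so this is an illustrative reading in which y_{u,v} ≥ y_{u′,v} is emitted only when every schema dimension prefers antecedent u over u′ by the tuned margin (α for the multiplicative form, β for the additive form); all names are our own:

```python
def schema_constraints(score_vec, pronoun, antecedents, alphas, betas,
                       multiplicative=True):
    """Generate ILP constraints of the form y[u, v] >= y[u2, v].

    score_vec(u, v) returns the list of schema scores s_i(u, v);
    alphas/betas are per-dimension thresholds tuned on dev data.
    Sketch only: the comparison form is our assumption.
    """
    constraints = []
    v = pronoun
    for u in antecedents:
        for u2 in antecedents:
            if u == u2:
                continue
            s_u, s_u2 = score_vec(u, v), score_vec(u2, v)
            if multiplicative:
                ok = all(a > b * alpha
                         for a, b, alpha in zip(s_u, s_u2, alphas))
            else:
                ok = all(a > b + beta
                         for a, b, beta in zip(s_u, s_u2, betas))
            if ok:  # every dimension prefers u: force y[u,v] >= y[u2,v]
                constraints.append(((u, v), ">=", (u2, v)))
    return constraints
```

Raising α or β makes the test harder to pass, so fewer (but more reliable) constraints reach the ILP, matching the conservativeness trade-off described above.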
We now briefly explain the pre-processing step required to extract the score vector S(u, v) for a pair of mentions. Define a triple structure t_m ≜ pred_m(m, a_m) for any m ∈ M. The subscript m on pred and a emphasizes that they are extracted as a function of the mention m. The extraction of triples is done by utilizing the dependency parse tree from the Easy-first dependency parser (Goldberg and Elhadad, 2010). We start with a mention m and extract its related predicate and the other argument based on the dependency parse tree and part-of-speech information. To handle multiword predicates and arguments, we use a set of hand-designed rules. We then get the score vector S(u, v) by concatenating all scores of the Predicate Schemas given the two triples t_u, t_v, expanding the score representation for each type of Predicate Schemas given in Table 2. In addition to schema-driven constraints, we also apply constraints between pairs of pronouns within a fixed distance. Two pronouns that are semantically different (e.g., he vs. it) must refer to different antecedents. Two non-possessive pronouns that are related to the same predicate (e.g., he saw him) must also refer to different antecedents.

Knowledge Acquisition
One key point that remains to be explained is how to acquire the knowledge scores S(u, v). In this section, we propose multiple ways to acquire these scores. In the current implementation, we make use of four resources, each of which generates its own score vector. The overall score vector is therefore the concatenation of the score vectors from each resource:

    S(u, v) = [S_giga(u, v); S_wiki(u, v); S_web(u, v); S_pol(u, v)]

Gigaword Co-occurrence
We extract triples t_m ≜ pred_m(m, a_m) (explained in Section 3) from the Gigaword data (4,111,240 documents). We start by extracting noun phrases using the Illinois-Chunker (Punyakanok and Roth, 2001). For each noun phrase, we extract its head noun and then extract the associated predicate and argument to form a triple.
We gather the statistics for both schema types after applying lemmatization to the predicates and arguments. Using the extracted triples, we get a score vector for each schema type. To extract scores for Type 1 Predicate Schemas, we create occurrence counts for each schema instance. After all scores are gathered, our goal is to query S^(1)_giga(u, v) ≡ S(pred_v(m = u, a = a_v)) from our knowledge base. The returned score is the log(.) of the number of occurrences.
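The Type 1 counting step can be sketched as a counter over extracted triples with log-counts as scores; the +1 smoothing and the backoff entries for dropped slots are our own additions, not specified in the text:

```python
import math
from collections import Counter


def build_type1_counts(triples):
    """Count (pred, mention, arg) occurrences, assuming lemmatization
    has already been applied, with backoff entries for dropped slots."""
    counts = Counter()
    for pred, m, a in triples:
        counts[(pred, m, a)] += 1
        counts[(pred, m, None)] += 1   # pred(m, *) backoff
        counts[(pred, None, a)] += 1   # pred(*, a) backoff
    return counts


def type1_score(counts, pred, m, a=None):
    """S(pred(m, a)) as the log of the occurrence count (+1 smoothing,
    our own choice, so unseen triples score 0)."""
    return math.log(counts[(pred, m, a)] + 1)
```

The log keeps heavy-tailed corpus counts on a comparable scale across schema instances before they are concatenated into the score vector.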
For Type 2 Predicate Schemas, we gather statistics of triple co-occurrence. We count the co-occurrence of neighboring triples that share at least one linked argument. We consider two triples to be neighbors if they are within a distance of three sentences. We use two heuristic rules to decide whether a pair of arguments between two neighboring triples are coreferents: 1) If the head nouns of the two arguments match, we consider them coreferents.
2) If one argument in the first triple is a person name and there is a compatible pronoun (based on gender and plurality information) in the second triple, they are also labeled as coreferents. We also extract the discourse connectives between triples (because, therefore, etc.), if there are any. Our method is related to, but different from, the proposal in Balasubramanian et al. (2012), who suggested extracting triples using an OpenIE system (Mausam et al., 2012). We extract triples by starting from a mention and then extracting the predicate and the other argument; an OpenIE system does not easily provide this ability. Our Gigaword counts are gathered in a way similar to what has been proposed in Chambers and Jurafsky (2009), but we gather much larger amounts of data.
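The two linking heuristics can be sketched as follows; the head-extraction rule and the small pronoun feature table are crude placeholders of our own, standing in for the system's actual gender/plurality resources:

```python
# Minimal pronoun feature table (illustrative, not exhaustive).
PRONOUN_FEATURES = {
    "he": ("male", "sg"), "him": ("male", "sg"), "his": ("male", "sg"),
    "she": ("female", "sg"), "her": ("female", "sg"),
    "they": (None, "pl"), "them": (None, "pl"),
}


def head_noun(phrase):
    """Crude head extraction: the last token of the phrase (sketch only)."""
    return phrase.lower().split()[-1]


def args_corefer(arg1, arg2, arg1_is_person=False, arg1_gender=None,
                 arg1_number="sg"):
    """Heuristic 1: the head nouns of the two arguments match.
    Heuristic 2: arg1 is a person name and arg2 is a pronoun whose
    gender/plurality features are compatible with it."""
    if head_noun(arg1) == head_noun(arg2):
        return True
    if arg1_is_person and arg2.lower() in PRONOUN_FEATURES:
        gender, number = PRONOUN_FEATURES[arg2.lower()]
        return ((gender is None or gender == arg1_gender)
                and number == arg1_number)
    return False
```

Pairs of neighboring triples whose arguments pass either test contribute a Type 2 co-occurrence count, keyed also by the discourse connective between them.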

Wikipedia Disambiguated Co-occurrence
One of the problems with blindly extracting triple counts is that we may miss important semantic information. To address this issue, we use the publicly available Illinois Wikifier (Cheng and Roth, 2013; Ratinov et al., 2011), a system that disambiguates mentions by mapping them to the correct Wikipedia pages, to process the Wikipedia data. We then extract from the Wikipedia text all entities, verbs and nouns, and gather co-occurrence statistics with these syntactic variations: 1) immediately after, 2) immediately before, 3) before, 4) after. For each of these variations, we get the probability and count of a pair of words (e.g., the probability/count of "bend" immediately following "limb") as separate dimensions of the score vector.
Given the co-occurrence information, we get a score vector S_wiki(u, v) corresponding to Type 1 Predicate Schemas, and hence S_wiki(u, v) ≡ S(pred_v(m = u, a = a_v)).

Web Search Query Count
Our third source of score vectors is web queries, which we implement using Google queries. We extract a score vector S_web(u, v) ≡ S(pred_v(m = u, a = a_v)) (Type 1 Predicate Schemas) by querying for 1) "u a_v", 2) "u pred_v", 3) "u pred_v a_v", 4) "a_v u". For each variation of the nouns (plural and singular) and verbs (different tenses), we create a different query and average the counts over all queries. Concatenating the counts (each is a separate dimension) gives us the score vector S_web(u, v).
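The averaging over morphological variants can be sketched as follows; `query_count` stands in for the actual web search API (a hypothetical callable returning a hit count for a quoted query string), so the sketch stays independent of any particular search service:

```python
from itertools import product
from statistics import mean


def web_score(mention_forms, pred_forms, arg_forms, query_count):
    """Average hit counts over all noun/verb variants for one query
    pattern, e.g. '"u pred_v a_v"'.

    query_count is an injected callable (hypothetical) that returns
    the hit count for a quoted query string.
    """
    counts = [query_count(f'"{m} {p} {a}"')
              for m, p, a in product(mention_forms, pred_forms, arg_forms)]
    return mean(counts) if counts else 0.0
```

Each of the four query patterns would yield one such averaged count, and the results are concatenated as separate dimensions of S_web(u, v).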

Polarity of Context
Another rich source of information is the polarity of context, which has been previously used for Winograd schema problems (Rahman and Ng, 2012).
Here we use a slightly modified version. The polarity scores are used for Type 1 Predicate Schemas, and therefore we want to get S_pol(u, v) ≡ S(pred_v(m = u, a = a_v)). We first extract polarity values Po(pred_u) and Po(pred_v) by repeating the following procedure for each of them:

• We extract initial polarity information for the predicate (using the data provided by Wilson et al. (2005)).
• If the role of the mention is object, we negate its polarity.
• If there is a polarity-reversing discourse connective (such as "but") preceding the predicate, we reverse the polarity.
• If there is a negative comparative adverb (such as "less" or "lower"), we reverse the polarity.

We give the total number of mentions and pronouns, while the number of predictions for pronouns is specific to the test data. We added 746 mentions (709 of them pronouns) to WinoCoref compared to Winograd.
Given the polarity values Po(pred u ) and Po(pred v ), we construct the score vector S pol (u, v) following Table 4.
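The four-step polarity procedure can be sketched as follows; the tiny lexicon and the connective/adverb lists are placeholders of our own, standing in for the Wilson et al. (2005) resource:

```python
POLARITY_LEXICON = {"rob": -1, "arrest": -1, "praise": +1}  # placeholder
REVERSING_CONNECTIVES = {"but", "although"}
NEGATIVE_ADVERBS = {"less", "lower"}


def predicate_polarity(pred, role, preceding_connective=None, adverbs=()):
    """Compute Po(pred) following the four steps in the text."""
    pol = POLARITY_LEXICON.get(pred, 0)          # 1) initial lexicon polarity
    if role == "obj":                            # 2) negate for object role
        pol = -pol
    if preceding_connective in REVERSING_CONNECTIVES:
        pol = -pol                               # 3) reversing connective
    if any(a in NEGATIVE_ADVERBS for a in adverbs):
        pol = -pol                               # 4) negative comparative adverb
    return pol
```

The resulting pair (Po(pred_u), Po(pred_v)) is then mapped through Table 4 into the binary score vector s_pol(u, v).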

Experiments
In this section, we evaluate our system on both hard coreference problems and general coreference problems, and provide a detailed analysis of the impact of our proposed Predicate Schemas. Since we treat resolving hard pronouns as part of the general coreference problem, we extend the Winograd dataset with a more complete annotation to obtain a new dataset. We evaluate our system on both datasets, and show significant improvement over the baseline system and over the results reported in Rahman and Ng (2012). Moreover, we show that, at the same time, our system achieves state-of-the-art performance on standard coreference datasets.

Experimental Setup
Datasets: Since we aim to solve hard coreference problems, we choose to test our system on the Winograd dataset (Rahman and Ng, 2012). It is a challenging pronoun resolution dataset consisting of sentence pairs based on Winograd schemas. The original annotation only specifies one pronoun and two entities in each sentence, and resolution is treated as a binary decision for each pronoun. As our target is to model and solve these as general coreference problems, we expand the annotation to include all pronouns and their linked entities as mentions (we call this re-annotated dataset WinoCoref). Ex.3 in Section 1 is from the Winograd dataset; it originally only specifies he as the pronoun in question, and we added him and his as additional target pronouns. We also use two standard coreference resolution datasets, ACE and OntoNotes.

Systems: We inject Predicate Schemas knowledge as mention-pair features and retrain the system (KnowFeat).
We use the original coreference model and Predicate Schemas knowledge as constraints during inference (KnowCons). We also have a combined system (KnowComb), which uses the schema knowledge both to add features for learning and as constraints for inference. A summary of all systems is provided in Table 6.

Evaluation Metrics: When evaluating on the full ACE and OntoNotes datasets, we use the widely recognized metrics MUC (Vilain et al., 1995), BCUB (Bagga and Baldwin, 1998), Entity-based CEAF (CEAF_e) (Luo, 2005), and their average. As Winograd is a pronoun resolution dataset, we use precision as the evaluation metric. Although WinoCoref is more general, each coreference cluster only contains 2-4 mentions, all within the same sentence. Since traditional coreference metrics do not serve as good metrics here, we extend the precision metric and design a new one called AntePre. Suppose there are k pronouns in the dataset, and the pronouns have n_1, n_2, ..., n_k antecedents, respectively. We can view predicted coreference clusters as binary decisions on each antecedent-pronoun pair (linked or not). The total number of binary decisions is Σ_{i=1}^{k} n_i. We then measure how many of these binary decisions are correct; let m be the number of correct decisions. AntePre is computed as:

    AntePre = m / Σ_{i=1}^{k} n_i
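AntePre can be sketched directly from its definition; the representation below (candidate antecedents per pronoun, with gold and predicted link sets) is our own reading of the pair-counting convention:

```python
def antepre(candidates, gold, predicted):
    """AntePre: fraction of antecedent-pronoun binary decisions that
    match the gold annotation.

    candidates[p]: the candidate antecedents of pronoun p (n_i of them).
    gold[p] / predicted[p]: the subsets of candidates[p] actually linked.
    """
    total = correct = 0
    for p, cands in candidates.items():
        for a in cands:
            total += 1
            # A decision is correct when linked/not-linked agrees with gold.
            if (a in gold.get(p, set())) == (a in predicted.get(p, set())):
                correct += 1
    return correct / total if total else 0.0
```

On a Winograd-style instance with two candidates and one correct antecedent, picking the right one scores 1.0, picking the wrong one 0.0, and abstaining 0.5.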

Results for Hard Coreference Problems
Performance results on the Winograd and WinoCoref datasets are shown in Table 7. The best performing system is KnowComb. It improves by over 20% over a state-of-the-art general coreference system on Winograd, and also outperforms Rahman and Ng (2012) by a margin of 3.3%. On the WinoCoref dataset, it improves by 15%. These results show significant performance improvement from using Predicate Schemas knowledge on hard coreference problems. Note that the system developed in Rahman and Ng (2012) cannot be used on the WinoCoref dataset. The results also show that, when the knowledge quality is high, it is better to compile the knowledge into constraints than to add it as features.

Results for Standard Coreference Problems
Performance results on the standard ACE and OntoNotes datasets are shown in Table 8. The KnowComb system achieves the same level of performance as the state-of-the-art general coreference system it is based on. As hard coreference problems are rare in standard coreference datasets, we do not see significant performance improvement there. However, these results show that our additional Predicate Schemas do not harm predictions for regular mentions.

Detailed Analysis
To study the coverage of our Predicate Schemas knowledge, we label the instances in Winograd (which also applies to WinoCoref) with the type of Predicate Schemas knowledge required. The distribution of the instances is shown in Table 9. Our proposed Predicate Schemas cover 73% of the instances.
We also provide an ablation study on the WinoCoref dataset in Table 10. These results use the best performing KnowComb system. The first row gives the performance of KnowComb with only Type 1 schema knowledge, tested on all data, while the third row gives the performance of the same model tested only on Cat1 data. The second row gives the performance of KnowComb with only Type 2 schema knowledge on all data, while the fourth row gives the performance of the same model tested only on Cat2 data. The results show that Type 1 and Type 2 schema knowledge achieve higher precision on Category 1 and Category 2 instances, respectively, than on the full data. Type 1 and Type 2 knowledge perform similarly on the full data, but instances in Category 2 are harder to solve than those in Category 1. The performance drop between Cat1/Cat2 and the full data also indicates the need to design richer knowledge schemas and to refine knowledge acquisition for further performance improvement.

Related Work
Winograd Schema: Winograd (1972) showed that small changes in context can completely change coreference decisions. Levesque et al. (2011) proposed to assemble a set of sentences that comply with Winograd's schema. Specifically, these are pairs of sentences which are identical except for minor differences that lead to different references of the same pronoun. These references can easily be resolved by humans but are hard, he claimed, for computer programs.

Anaphora Resolution: There has been a lot of work on anaphora resolution in the past two decades. Many of the early rule-based systems, like Hobbs (1978) and Lappin and Leass (1994), gained considerable popularity. The early designs were easy to understand and the rules were designed manually.
With the development of machine learning based models (Connolly et al., 1994; Soon et al., 2001b; Ng and Cardie, 2002a), attention shifted to solving standard coreference resolution problems. However, many hard coreference problems involve pronouns; as Winograd's schema shows, there is still a need for further investigation in this subarea.

World Knowledge Acquisition: Many tasks in NLP (such as Textual Entailment, Question Answering, etc.) require world knowledge. Although there is much existing work on acquiring it (Schwartz and Gomez, 2009; Balasubramanian et al., 2013; Tandon et al., 2014), there is still no consensus on how to represent, gather and utilize high-quality world knowledge. When it comes to coreference resolution, there are a handful of works which either use web query information or align mentions to an external knowledge base (Rahman and Ng, 2011b; Kobdani et al., 2011; Ratinov and Roth, 2012; Bansal and Klein, 2012; Zheng et al., 2013). With the introduction of Predicate Schemas, our goal is to bring these different approaches together and provide a coherent view.

Table 3: Possible variations for the scoring function statistics. Here * indicates that the argument is dropped.

Table 4: Extracting the polarity score given the polarity information of a mention-pair (u, v). For brevity, we use the shorthand notation p_v ≜ pred_v and p_u ≜ pred_u. 1{•} is an indicator function. s_pol(u, v) is a binary vector of size three.

To avoid sparsity, we only keep the mention roles (only subj or obj; no exact strings are kept). Two triple-pairs are considered different if they have different predicates, different roles, different coreferred argument-pairs, or different discourse connectives. The co-occurrence counts extracted in this form correspond to Type 2 schemas in Table 2. During inference, we match a Type 2 schema for S

Table 7: Performance results on the Winograd and WinoCoref datasets. All three of our systems are trained on WinoCoref, and we evaluate the predictions on both datasets. Our systems improve over the baselines by more than 20% on Winograd and more than 15% on WinoCoref.

Table 8: Performance results on the ACE and OntoNotes datasets. Our system achieves the same level of performance as a state-of-the-art general coreference system.

Table 9: Distribution of instances in the Winograd dataset by category. Cat1/Cat2 is the subset of instances that require Type 1/Type 2 schema knowledge, respectively. All other instances are put into Cat3. Cat1 and Cat2 instances can be covered by our proposed Predicate Schemas.

Table 10: Ablation study of knowledge schemas on the WinoCoref dataset.