%0 Conference Proceedings %T pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference %A Joshi, Mandar %A Choi, Eunsol %A Levy, Omer %A Weld, Daniel %A Zettlemoyer, Luke %Y Burstein, Jill %Y Doran, Christy %Y Solorio, Thamar %S Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) %D 2019 %8 June %I Association for Computational Linguistics %C Minneapolis, Minnesota %F joshi-etal-2019-pair2vec %X Reasoning about implied relationships (e.g. paraphrastic, common sense, encyclopedic) between pairs of words is crucial for many cross-sentence inference problems. This paper proposes new methods for learning and using embeddings of word pairs that implicitly represent background knowledge about such relationships. Our pairwise embeddings are computed as a compositional function of each word’s representation, which is learned by maximizing the pointwise mutual information (PMI) with the contexts in which the two words co-occur. We add these representations to the cross-sentence attention layer of existing inference models (e.g. BiDAF for QA, ESIM for NLI), instead of extending or replacing existing word embeddings. Experiments show a gain of 2.7% on the recently released SQuAD 2.0 and 1.3% on MultiNLI. Our representations also aid in better generalization with gains of around 6-7% on adversarial SQuAD datasets, and 8.8% on the adversarial entailment test set by Glockner et al. (2018). %R 10.18653/v1/N19-1362 %U https://aclanthology.org/N19-1362 %U https://doi.org/10.18653/v1/N19-1362 %P 3597-3608