Higher-Order Coreference Resolution with Coarse-to-Fine Inference

Kenton Lee, Luheng He, Luke Zettlemoyer


Abstract
We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.
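The two mechanisms described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' released implementation (see kentonl/e2e-coref for that); the names and shapes (g for span embeddings, W_c for the coarse bilinear weight, W_f for the gate weight, num_iters, k) are hypothetical and chosen only for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coarse_scores(g, W_c):
    # Cheap bilinear factor s_c(i, j) = g_i^T W_c g_j, used to rank candidate
    # antecedents before the expensive pairwise scorer is applied.
    return g @ W_c @ g.T                        # [num_spans, num_spans]

def prune_antecedents(g, W_c, k):
    # Coarse-to-fine pruning: keep only the top-k candidate antecedents per
    # span (only j < i are valid), so the fine scorer sees far fewer pairs.
    s_c = coarse_scores(g, W_c)
    valid = np.tril(np.ones_like(s_c), k=-1) > 0
    s_c = np.where(valid, s_c, -np.inf)
    return np.argsort(-s_c, axis=-1)[:, :k]     # [num_spans, k] antecedent ids

def refine_spans(g, antecedent_scores, W_f, num_iters=2):
    # Higher-order refinement: the softmax over each span's antecedent scores
    # acts as attention; the expected antecedent vector is merged back into
    # the span representation through a learned gate, and this is repeated.
    for _ in range(num_iters):
        p = softmax(antecedent_scores, axis=-1)             # attention weights
        a = p @ g                                           # expected antecedent vectors
        f = sigmoid(np.concatenate([g, a], axis=-1) @ W_f)  # gate in [0, 1]
        g = f * g + (1.0 - f) * a                           # gated update of span reps
        # In the full model the antecedent scores would be recomputed from the
        # refined representations before the next iteration; omitted here.
    return g
```

For example, with 100 candidate spans and 64-dimensional span embeddings, g would have shape (100, 64), W_c shape (64, 64), and W_f shape (128, 64); pruning to k = 50 antecedents per span bounds the number of pairs the fine-grained scorer must evaluate.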
Anthology ID:
N18-2108
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marilyn Walker, Heng Ji, Amanda Stent
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
687–692
URL:
https://aclanthology.org/N18-2108
DOI:
10.18653/v1/N18-2108
Cite (ACL):
Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-Order Coreference Resolution with Coarse-to-Fine Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Higher-Order Coreference Resolution with Coarse-to-Fine Inference (Lee et al., NAACL 2018)
PDF:
https://aclanthology.org/N18-2108.pdf
Video:
https://aclanthology.org/N18-2108.mp4
Code
kentonl/e2e-coref + additional community code
Data
CoNLL, CoNLL-2012, OntoNotes 5.0