Plato: A Selective Context Model for Entity Resolution

Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, Fernando Pereira


Abstract
We present Plato, a probabilistic model for entity resolution that includes a novel approach for handling noisy or uninformative features, and supplements labeled training data derived from Wikipedia with a very large unlabeled text corpus. Training and inference in the proposed model can easily be distributed across many servers, allowing it to scale to over 10⁷ entities. We evaluate Plato on three standard datasets for entity resolution. Our approach achieves the best results to date on TAC KBP 2011 and is highly competitive on both the CoNLL 2003 and TAC KBP 2012 datasets.
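The abstract describes entity resolution: linking each textual mention to one of many candidate knowledge-base entities based on its context. As a rough, self-contained illustration of that task (not of Plato's selective context model, whose details are in the paper), the sketch below ranks hypothetical candidate entities for a mention using a naive Bayes-style combination of a link prior and per-feature context likelihoods; all entity names, priors, and probabilities here are invented for the example.

import math

# Hypothetical candidate entities for the mention "Plato", with link priors
# (e.g., as might be estimated from Wikipedia anchor statistics).
PRIOR = {
    "Plato_(philosopher)": 0.85,
    "Plato_(programming_tool)": 0.05,
    "Plato,_Missouri": 0.10,
}

# Hypothetical per-entity feature likelihoods P(feature | entity), with a
# small floor for unseen features as simple smoothing.
LIKELIHOOD = {
    "Plato_(philosopher)": {"socrates": 0.3, "dialogue": 0.2, "athens": 0.2},
    "Plato_(programming_tool)": {"model": 0.3, "resolution": 0.2, "corpus": 0.2},
    "Plato,_Missouri": {"census": 0.3, "missouri": 0.4},
}
UNSEEN = 1e-4


def score(entity, context_features):
    """Log posterior up to a constant: log prior plus summed log likelihoods."""
    s = math.log(PRIOR[entity])
    feats = LIKELIHOOD[entity]
    for f in context_features:
        s += math.log(feats.get(f, UNSEEN))
    return s


def resolve(context_features):
    """Return candidate entities ranked by score, best first."""
    return sorted(PRIOR, key=lambda e: score(e, context_features), reverse=True)


if __name__ == "__main__":
    context = ["model", "resolution", "corpus"]
    for e in resolve(context):
        print(f"{e}: {score(e, context):.2f}")

In this toy setup the context features pull the decision toward "Plato_(programming_tool)" despite its low prior; the paper's contribution lies in learning which context features to trust at much larger scale.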
Anthology ID:
Q15-1036
Volume:
Transactions of the Association for Computational Linguistics, Volume 3
Year:
2015
Address:
Cambridge, MA
Editors:
Michael Collins, Lillian Lee
Venue:
TACL
Publisher:
MIT Press
Pages:
503–515
URL:
https://aclanthology.org/Q15-1036
DOI:
10.1162/tacl_a_00154
Cite (ACL):
Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, and Fernando Pereira. 2015. Plato: A Selective Context Model for Entity Resolution. Transactions of the Association for Computational Linguistics, 3:503–515.
Cite (Informal):
Plato: A Selective Context Model for Entity Resolution (Lazic et al., TACL 2015)
PDF:
https://aclanthology.org/Q15-1036.pdf