%0 Conference Proceedings
%T End-to-end Deep Reinforcement Learning Based Coreference Resolution
%A Fei, Hongliang
%A Li, Xu
%A Li, Dingcheng
%A Li, Ping
%Y Korhonen, Anna
%Y Traum, David
%Y Màrquez, Lluís
%S Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
%D 2019
%8 July
%I Association for Computational Linguistics
%C Florence, Italy
%F fei-etal-2019-end
%X Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.
%R 10.18653/v1/P19-1064
%U https://aclanthology.org/P19-1064
%U https://doi.org/10.18653/v1/P19-1064
%P 660-665