Bridging Anaphora Resolution as Question Answering

Yufang Hou


Abstract
Most previous studies on bridging anaphora resolution (Poesio et al., 2004; Hou et al., 2013b; Hou, 2018a) use the pairwise model to tackle the problem and assume that the gold mention information is given. In this paper, we cast bridging anaphora resolution as question answering based on context. This allows us to find the antecedent for a given anaphor without knowing any gold mention information (except the anaphor itself). We present a question answering framework (BARQA) for this task, which leverages the power of transfer learning. Furthermore, we propose a novel method to generate a large amount of “quasi-bridging” training data. We show that our model, pre-trained on this dataset and fine-tuned on a small in-domain dataset, achieves new state-of-the-art results for bridging anaphora resolution on two bridging corpora (ISNotes (Markert et al., 2012) and BASHI (Rösiger, 2018)).
Anthology ID:
2020.acl-main.132
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Note:
Pages:
1428–1438
URL:
https://aclanthology.org/2020.acl-main.132
DOI:
10.18653/v1/2020.acl-main.132
Cite (ACL):
Yufang Hou. 2020. Bridging Anaphora Resolution as Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1428–1438, Online. Association for Computational Linguistics.
Cite (Informal):
Bridging Anaphora Resolution as Question Answering (Hou, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.132.pdf
Video:
http://slideslive.com/38928717
Code
IBM/bridging-resolution
Data
WSC