Ellipsis Resolution as Question Answering: An Evaluation

Rahul Aralikatte, Matthew Lamm, Daniel Hardt, Anders Søgaard


Abstract
Most, if not all, forms of ellipsis (e.g., so does Mary) are similar to reading comprehension questions (what does Mary do?), in that, in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F1).
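For illustration, the task framing described in the abstract can be sketched with an off-the-shelf extractive QA model. The snippet below is a minimal Python sketch, assuming the Hugging Face transformers library and the deepset/roberta-base-squad2 checkpoint; neither is the paper's own model or data setup (the authors' code lives at rahular/ellipsis-baselines). The ellipsis "so does Mary" is rephrased as a question, and the answer span predicted over the preceding discourse serves as the resolved antecedent.

# Illustrative sketch only: an off-the-shelf extractive QA model,
# not the models evaluated in the paper (see rahular/ellipsis-baselines).
from transformers import pipeline

# The checkpoint name is an assumed example, not one used by the authors.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Preceding discourse containing the antecedent of the ellipsis.
context = "John plays the guitar every evening, and so does Mary."

# The ellipsis "so does Mary" is rephrased as a reading-comprehension
# question; the predicted answer span is the resolved antecedent.
result = qa(question="What does Mary do?", context=context)

print(result["answer"])  # expected span: something like "plays the guitar"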
Anthology ID:
2021.eacl-main.68
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
810–817
URL:
https://aclanthology.org/2021.eacl-main.68
DOI:
10.18653/v1/2021.eacl-main.68
Cite (ACL):
Rahul Aralikatte, Matthew Lamm, Daniel Hardt, and Anders Søgaard. 2021. Ellipsis Resolution as Question Answering: An Evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 810–817, Online. Association for Computational Linguistics.
Cite (Informal):
Ellipsis Resolution as Question Answering: An Evaluation (Aralikatte et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.68.pdf
Code:
rahular/ellipsis-baselines
Data:
WikiCoref
decaNLP