On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution

Artem Abzaliev


Abstract
This paper presents the 3rd-place-winning solution to the GAP coreference resolution shared task. The approach has two key components: fine-tuning the BERT language representation model (Devlin et al., 2018) and using external datasets during training. The model takes hidden states from intermediate BERT layers rather than from the last layer. The resulting system nearly eliminates the per-gender difference in log loss in cross-validation while maintaining high performance.
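
To illustrate the intermediate-layer idea from the abstract, here is a minimal sketch, assuming the HuggingFace transformers library and bert-base-uncased; it is not the author's implementation, and the layer choice and token index are illustrative assumptions.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

text = "Alice told Mary that she would win."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: (embedding output, layer 1, ..., layer 12) for bert-base.
hidden_states = outputs.hidden_states
intermediate = hidden_states[-2]            # an intermediate layer instead of the last one
pronoun_idx = 5                             # token position of "she" in this tokenization (illustrative)
pronoun_vec = intermediate[0, pronoun_idx]  # (hidden_size,) vector usable as a mention representation
print(pronoun_vec.shape)                    # torch.Size([768]) for bert-base

Representations like pronoun_vec for the pronoun and the two candidate names can then be fed to a small classifier that scores the coreference decision.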
Anthology ID:
W19-3816
Volume:
Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Marta R. Costa-jussà, Christian Hardmeier, Will Radford, Kellie Webster
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
107–112
URL:
https://aclanthology.org/W19-3816
DOI:
10.18653/v1/W19-3816
Cite (ACL):
Artem Abzaliev. 2019. On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 107–112, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution (Abzaliev, GeBNLP 2019)
PDF:
https://aclanthology.org/W19-3816.pdf
Data
GAP Coreference Dataset
WinoBias