Explicitly modeling case improves neural dependency parsing

Clara Vania, Adam Lopez


Abstract
Neural dependency parsing models that compose word representations from characters can presumably exploit morphosyntax when making attachment decisions. How much do they know about morphology? We investigate how well they handle morphological case, which is important for parsing. Our experiments on Czech, German and Russian suggest that adding explicit morphological case—either oracle or predicted—improves neural dependency parsing, indicating that the learned representations in these models do not fully encode the morphological knowledge that they need, and can still benefit from targeted forms of explicit linguistic modeling.
Anthology ID: W18-5447
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 356–358
URL: https://aclanthology.org/W18-5447
DOI: 10.18653/v1/W18-5447
Cite (ACL): Clara Vania and Adam Lopez. 2018. Explicitly modeling case improves neural dependency parsing. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 356–358, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Explicitly modeling case improves neural dependency parsing (Vania & Lopez, EMNLP 2018)
PDF: https://aclanthology.org/W18-5447.pdf