Comparing Top-Down and Bottom-Up Neural Generative Dependency Models

Austin Matthews, Graham Neubig, Chris Dyer


Abstract
Recurrent neural network grammars generate sentences using phrase-structure syntax and perform very well on both parsing and language modeling. To explore whether generative dependency models are similarly effective, we propose two new generative models of dependency syntax. Both models use recurrent neural nets to avoid making explicit independence assumptions, but they differ in the order used to construct the trees: one builds the tree bottom-up and the other top-down, which profoundly changes the estimation problem faced by the learner. We evaluate the two models on three typologically different languages: English, Arabic, and Japanese. While both generative models improve parsing performance over a discriminative baseline, they are significantly less effective than non-syntactic LSTM language models. Surprisingly, little difference between the construction orders is observed for either parsing or language modeling.
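The contrast the abstract draws between the two construction orders can be illustrated with a toy example. The sketch below is illustrative only: it is not the paper's transition systems or model parameterizations, and the example tree, words, and function names are invented here. It simply shows how a head-first (top-down) traversal and a dependents-first (bottom-up) traversal emit the words of the same dependency tree in different orders, which is the difference in generation order the two models embody.

```python
# Toy dependency tree for "the dog barks":
# "barks" is the root, "dog" depends on "barks", "the" depends on "dog".
TREE = {"barks": ["dog"], "dog": ["the"], "the": []}
ROOT = "barks"

def top_down_order(head):
    """Emit each head before its dependents (pre-order traversal)."""
    yield head
    for dep in TREE[head]:
        yield from top_down_order(dep)

def bottom_up_order(head):
    """Emit all dependents before the head that governs them (post-order)."""
    for dep in TREE[head]:
        yield from bottom_up_order(dep)
    yield head

print(list(top_down_order(ROOT)))   # ['barks', 'dog', 'the']
print(list(bottom_up_order(ROOT)))  # ['the', 'dog', 'barks']
```

A generative model that follows the first order must predict a word before knowing any of its dependents, while one that follows the second sees a head's full subtrees before generating the head, which is why the two orders pose different estimation problems.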
Anthology ID: K19-1022
Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Mohit Bansal, Aline Villavicencio
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 227–237
URL: https://aclanthology.org/K19-1022
DOI: 10.18653/v1/K19-1022
Cite (ACL): Austin Matthews, Graham Neubig, and Chris Dyer. 2019. Comparing Top-Down and Bottom-Up Neural Generative Dependency Models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 227–237, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Comparing Top-Down and Bottom-Up Neural Generative Dependency Models (Matthews et al., CoNLL 2019)
PDF: https://aclanthology.org/K19-1022.pdf