%0 Journal Article
%T What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
%A de Lhoneux, Miryam
%A Stymne, Sara
%A Nivre, Joakim
%J Computational Linguistics
%D 2020
%8 December
%V 46
%N 4
%F de-lhoneux-etal-2020-lstms
%X There is a growing interest in investigating what neural NLP models learn about language. A prominent open question is whether it is necessary to model hierarchical structure. We present a linguistic investigation of a neural parser adding insights to this question. We look at transitivity and agreement information of auxiliary verb constructions (AVCs) in comparison to finite main verbs (FMVs). This comparison is motivated by theoretical work in dependency grammar and in particular the work of Tesnière (1959), where AVCs and FMVs are both instances of a nucleus, the basic unit of syntax. An AVC is a dissociated nucleus; it consists of at least two words, and an FMV is its non-dissociated counterpart, consisting of exactly one word. We suggest that the representation of AVCs and FMVs should capture similar information. We use diagnostic classifiers to probe agreement and transitivity information in vectors learned by a transition-based neural parser in four typologically different languages. We find that the parser learns different information about AVCs and FMVs if only sequential models (BiLSTMs) are used in the architecture but similar information when a recursive layer is used. We find explanations for why this is the case by looking closely at how information is learned in the network and looking at what happens with different dependency representations of AVCs. We conclude that there may be benefits to using a recursive layer in dependency parsing and that we have not yet found the best way to integrate it in our parsers.
%R 10.1162/coli_a_00392
%U https://aclanthology.org/2020.cl-4.3
%U https://doi.org/10.1162/coli_a_00392
%P 763-784