%0 Conference Proceedings
%T When classifying grammatical role, BERT doesn't care about word order... except when it matters
%A Papadimitriou, Isabel
%A Futrell, Richard
%A Mahowald, Kyle
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F papadimitriou-etal-2022-classifying-grammatical
%X Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language. For example, the words chopped, chef, and onion are more likely used to convey "The chef chopped the onion," not "The onion chopped the chef." Recent work has shown large language models to be surprisingly word order invariant, but crucially has largely considered natural prototypical inputs, where compositional meaning mostly matches lexical expectations. To overcome this confound, we probe grammatical role representation in English BERT and GPT-2, on instances where lexical expectations are not sufficient, and word order knowledge is necessary for correct classification. Such non-prototypical instances are naturally occurring English sentences with inanimate subjects or animate objects, or sentences where we systematically swap the arguments to make sentences like "The onion chopped the chef". We find that, while early layer embeddings are largely lexical, word order is in fact crucial in defining the later-layer representations of words in semantically non-prototypical positions. Our experiments isolate the effect of word order on the contextualization process, and highlight how models use context in the uncommon, but critical, instances where it matters.
%R 10.18653/v1/2022.acl-short.71
%U https://aclanthology.org/2022.acl-short.71
%U https://doi.org/10.18653/v1/2022.acl-short.71
%P 636-643