%0 Conference Proceedings
%T Passing Parser Uncertainty to the Transformer: Labeled Dependency Distributions for Neural Machine Translation
%A Pu, Dongqi
%A Sima’an, Khalil
%Y Moniz, Helena
%Y Macken, Lieve
%Y Rufener, Andrew
%Y Barrault, Loïc
%Y Costa-jussà, Marta R.
%Y Declercq, Christophe
%Y Koponen, Maarit
%Y Kemp, Ellie
%Y Pilos, Spyridon
%Y Forcada, Mikel L.
%Y Scarton, Carolina
%Y Van den Bogaert, Joachim
%Y Daems, Joke
%Y Tezcan, Arda
%Y Vanroy, Bram
%Y Fonteyne, Margot
%S Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
%D 2022
%8 June
%I European Association for Machine Translation
%C Ghent, Belgium
%F pu-simaan-2022-passing
%X Existing syntax-enriched neural machine translation (NMT) models work either with the single most-likely unlabeled parse or with the n-best unlabeled parses produced by an external parser. Passing a single parse or the n-best parses to the NMT model risks propagating parse errors. Furthermore, unlabeled parses represent only syntactic groupings without their linguistically relevant categories. In this paper we explore the question: does passing both parser uncertainty and labeled syntactic knowledge to the Transformer improve its translation performance? This paper contributes a novel method for infusing the whole labeled dependency distributions (LDD) of the source sentence’s dependency forest into the self-attention mechanism of the Transformer encoder. Experimental results on three language pairs demonstrate that the proposed approach outperforms both the vanilla Transformer and the single best-parse Transformer model across several evaluation metrics.
%U https://aclanthology.org/2022.eamt-1.7
%P 41-50