%0 Conference Proceedings
%T Reducing Disambiguation Biases in NMT by Leveraging Explicit Word Sense Information
%A Campolungo, Niccolò
%A Pasini, Tommaso
%A Emelin, Denis
%A Navigli, Roberto
%Y Carpuat, Marine
%Y de Marneffe, Marie-Catherine
%Y Meza Ruiz, Ivan Vladimir
%S Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, United States
%F campolungo-etal-2022-reducing
%X Recent studies have shed some light on a common pitfall of Neural Machine Translation (NMT) models, stemming from their struggle to disambiguate polysemous words without lapsing into their most frequently occurring senses in the training corpus. In this paper, we first provide a novel approach for automatically creating high-precision sense-annotated parallel corpora, and then put forward a specifically tailored fine-tuning strategy for exploiting these sense annotations during training without introducing any additional requirement at inference time. The use of explicit senses proved to be beneficial to reduce the disambiguation bias of a baseline NMT model, while, at the same time, leading our system to attain higher BLEU scores than its vanilla counterpart in 3 language pairs.
%R 10.18653/v1/2022.naacl-main.355
%U https://aclanthology.org/2022.naacl-main.355
%U https://doi.org/10.18653/v1/2022.naacl-main.355
%P 4824-4838