%0 Conference Proceedings
%T Towards Modeling the Style of Translators in Neural Machine Translation
%A Wang, Yue
%A Hoang, Cuong
%A Federico, Marcello
%Y Toutanova, Kristina
%Y Rumshisky, Anna
%Y Zettlemoyer, Luke
%Y Hakkani-Tur, Dilek
%Y Beltagy, Iz
%Y Bethard, Steven
%Y Cotterell, Ryan
%Y Chakraborty, Tanmoy
%Y Zhou, Yichao
%S Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2021
%8 June
%I Association for Computational Linguistics
%C Online
%F wang-etal-2021-towards
%X One key ingredient of neural machine translation is the use of large datasets from different domains and resources (e.g. Europarl, TED talks). These datasets contain documents translated by professional translators using different but consistent translation styles. Despite that, the model is usually trained in a way that neither explicitly captures the variety of translation styles present in the data nor translates new data in different and controllable styles. In this work, we investigate methods to augment the state-of-the-art Transformer model with translator information that is available in part of the training data. We show that our style-augmented translation models are able to capture the style variations of translators and to generate translations with different styles on new data. Indeed, the generated variations differ significantly, up to a +4.5 BLEU score difference. Despite that, human evaluation confirms that the translations are of the same quality.
%R 10.18653/v1/2021.naacl-main.94
%U https://aclanthology.org/2021.naacl-main.94
%U https://doi.org/10.18653/v1/2021.naacl-main.94
%P 1193-1199