Predicting Human Translation Difficulty with Neural Machine Translation

Zheng Wei Lim, Ekaterina Vylomova, Charles Kemp, Trevor Cohn


Abstract
Human translators linger on some words and phrases more than others, and predicting this variation is a step towards explaining the underlying cognitive processes. Using data from the CRITT Translation Process Research Database, we evaluate the extent to which surprisal and attentional features derived from a Neural Machine Translation (NMT) model account for reading and production times of human translators. We find that surprisal and attention are complementary predictors of translation difficulty, and that surprisal derived from an NMT model is the single most successful predictor of production duration. Our analyses draw on data from hundreds of translators operating across 13 language pairs, and represent the most comprehensive investigation of human translation difficulty to date.
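The abstract centres on surprisal computed from an NMT model, i.e. -log p(y_t | y_<t, x) for each target token y_t given the source sentence x and the translation prefix. The sketch below illustrates, under assumptions, how such token-level surprisal scores could be extracted for a human translation using an off-the-shelf model via the Hugging Face transformers library; the model name, language pair, and base-2 scaling are illustrative choices, not details taken from the paper.

```python
# Minimal illustrative sketch (not the authors' code): score a human
# translation under an NMT model and return per-token surprisal in bits.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Helsinki-NLP/opus-mt-en-de"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).eval()

def token_surprisals(source: str, translation: str):
    """Return (token, surprisal) pairs, with surprisal = -log2 p(y_t | y_<t, x)."""
    enc = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=translation, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    # With labels supplied, the logits at position t predict target token y_t.
    log_probs = torch.log_softmax(out.logits, dim=-1)
    token_lp = log_probs[0].gather(1, labels[0].unsqueeze(1)).squeeze(1)
    surprisal_bits = (-token_lp / torch.log(torch.tensor(2.0))).tolist()
    tokens = tokenizer.convert_ids_to_tokens(labels[0])
    return list(zip(tokens, surprisal_bits))

print(token_surprisals("The cat sat on the mat.",
                       "Die Katze saß auf der Matte."))
```

In this kind of setup, each surprisal value can then be aligned with the corresponding reading or production time from a translation process corpus and used as a predictor in a regression analysis.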
Anthology ID:
2024.tacl-1.81
Volume:
Transactions of the Association for Computational Linguistics, Volume 12
Year:
2024
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
1479–1496
URL:
https://aclanthology.org/2024.tacl-1.81/
DOI:
10.1162/tacl_a_00714
Cite (ACL):
Zheng Wei Lim, Ekaterina Vylomova, Charles Kemp, and Trevor Cohn. 2024. Predicting Human Translation Difficulty with Neural Machine Translation. Transactions of the Association for Computational Linguistics, 12:1479–1496.
Cite (Informal):
Predicting Human Translation Difficulty with Neural Machine Translation (Lim et al., TACL 2024)
PDF:
https://aclanthology.org/2024.tacl-1.81.pdf