DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages

We introduce DivEMT, the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times and pauses were recorded, enabling an in-depth, cross-lingual evaluation of NMT quality and post-editing effectiveness. Using this new dataset, we assess the impact of two state-of-the-art NMT systems, Google Translate and the multilingual mBART-50 model, on translation productivity. We find that post-editing is consistently faster than translation from scratch. However, the magnitude of productivity gains varies widely across systems and languages, highlighting major disparities in post-editing effectiveness for languages at different degrees of typological relatedness to English, even when controlling for system architecture and training data size. We publicly release the complete dataset, including all collected behavioral data, to foster new research on the translation capabilities of NMT systems for typologically diverse languages.


Introduction
Recent advances in neural language modeling and multilingual training have prompted a widespread adoption of machine translation (MT) technologies across an unprecedented range of world languages. While the benefits of state-of-the-art MT for cross-lingual information access are undisputed (Lommel and Pielmeier, 2021), its usefulness as an aid to professional translators varies considerably across domains, subjects and language combinations (Zouhar et al., 2021). In the last decade, the MT community has been including an increasing number of languages in its automatic and human evaluation efforts (Bojar et al., 2013; Barrault et al., 2021). However, the results of these evaluations are typically not directly comparable across different language pairs for various reasons. First, reference-based automatic quality metrics are hardly comparable across different target languages (Bugliarello et al., 2020). Secondly, human judgments are collected independently for different language pairs, making their cross-lingual comparison vulnerable to confounding factors such as tested domains and training data sizes. Similarly, recent work on NMT post-editing efficiency has focused on specific language pairs such as English-Czech (Zouhar et al., 2021), German-Italian, German-French (Läubli et al., 2019) and English-Hindi (Ahsan et al., 2021), but a controlled comparison across a set of typologically diverse languages is still lacking.
In this work, we assess the usefulness of state-of-the-art NMT in professional translation with a strictly controlled cross-language setup (Figure 1). Specifically, professionals were asked to translate the same English documents into six typologically different languages (Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese) using the same platform and guidelines. Three translation modalities were adopted: human translation from scratch (HT), post-editing of Google Translate's translation (PE1), and post-editing of mBART-50's translation (PE2), the latter being a state-of-the-art open-source, multilingual NMT system. In addition to post-editing results, subjects' fine-grained editing behavior, including keystrokes and time information, was logged to measure productivity and effort across languages, systems and translation modalities. Finally, translators were asked to complete a qualitative assessment regarding their perceptions of MT quality and post-editing effort. The resulting DivEMT dataset is, to the best of our knowledge, the first public resource allowing a direct comparison of professional translators' productivity and fine-grained editing information across a set of typologically diverse languages. DivEMT is publicly released alongside this paper as a unique resource to study the language- and system-dependent nature of NMT advances in real-world translation scenarios.

Related Work
Cross-lingual MT Evaluation. Before the advent of NMT, Birch et al. (2008) studied how various language properties affected the quality of Statistical MT (SMT) across a sizeable sample of European language pairs. The comparison, however, was solely based on BLEU, which is in fact not comparable across different target languages (Bugliarello et al., 2020). Recent work on neural models introduced more principled ways to measure the intrinsic difficulty of language modeling (Gerz et al., 2018; Cotterell et al., 2018; Mielke et al., 2019) and machine translation (Bugliarello et al., 2020; Bisazza et al., 2021) for different languages. However, achieving this reliably without any human evaluation remains an open research question. Human evaluations of MT quality are routinely conducted during campaigns such as WMT (Koehn and Monz, 2006; Akhbardeh et al., 2021) and IWSLT (Cettolo et al., 2016, 2017), among others, but their focus is on language- and domain-specific ranking of MT systems, often leveraging non-professional annotators (Freitag et al., 2021), rather than cross-lingual quality comparisons. Concurrently to this work, Licht et al. (2022) proposed a new human evaluation protocol to improve consistency in cross-lingual MT quality assessment.
Post-editing NMT. Measuring post-editing effort across its temporal, cognitive, and technical dimensions (Krings, 2001) is a well-established way to assess the effectiveness and efficiency of MT as a component of specialized translation workflows. Seminal post-editing studies highlighted an increase in translators' productivity following MT adoption (Guerberof, 2009; Green et al., 2013; Läubli et al., 2013; Plitt and Masselot, 2010; Parra Escartín and Arcedillo, 2015). However, they also struggled to identify generalizable findings due to confounding factors like output quality, content domains, and high variance across language pairs and human subjects. With the advent of NMT, productivity gains of the new approach were extensively compared to those of SMT, the highly-customized dominant paradigm at the time (Castilho et al., 2017; Bentivogli et al., 2016; Toral et al., 2018; Läubli et al., 2019). Initial results were promising for NMT due to its better fluency and overall quality. Moreover, translators were shown to prefer NMT over SMT for post-editing, although a pronounced productivity increase was not always present. More recent work highlighted the productivity gains driven by NMT post-editing in a wider array of languages that were previously challenging for MT, such as English-Dutch (Daems et al., 2017), English-Hindi (Ahsan et al., 2021), English-Greek (Stasimioti and Sosoni, 2020), English-Finnish and English-Swedish (Koponen et al., 2020), all showing considerable variance among language pairs and subjects. Interestingly, Zouhar et al. (2021) found NMT post-editing speed to be comparable to translation from scratch in English-Czech, and highlighted a disconnect between moderate increases in automatic MT quality metrics and better post-editing productivity. In sum, research on post-editing NMT generally reports increased fluency and output quality, but productivity gains are hardly generalizable across language pairs and domains. Importantly, to our knowledge, no previous work has studied NMT post-editing over a set of typologically different languages while controlling for the effects of content types and domains, NMT engines, and translation interfaces.

The DivEMT Dataset
DivEMT's main purpose is to assess the usefulness of state-of-the-art NMT for professional translators and to study how this usefulness varies across target languages with different typological properties. We present below our data collection setup, which strikes a balance between simulating a realistic professional translation workflow and maximizing the comparability of results across languages.

Subjects and Task Scheduling
To control for the effect of individual translators' preferences and styles, we involve a total of 18 subjects (three per target language). During the experiment, each subject receives a series of short documents (3 to 5 sentences each) where the source text is presented in isolation (HT) or alongside a translation proposal produced by one of the NMT systems (PE1, PE2). The experiment comprises two phases. During the warm-up phase, a set of 5 documents is translated by all subjects following the same, randomly sampled sequence of modalities (HT, PE1 or PE2). This phase allows the subjects to get used to the setup and enables us to spot possible issues in the logged behavioral data before moving forward (warm-up data are excluded from the analysis of Section 4). In the main collection phase, each subject is asked to translate documents in a pseudo-random sequence of modalities. This time, however, the sequence is different for each translator and chosen so that each document gets translated in all three modalities. This allows us to measure translation productivity independently of individual subjects' speed and document-specific difficulties. A graphical overview of this process is shown in Figure 1, with additional details given in Appendix A. As productivity and other behavioral metrics can only be estimated with a sizable sample, we prioritize the number of documents over the number of subjects per language during budget allocation. A larger set of post-edited documents also provides more insight into the error type distribution of NMT systems across different language pairs, an analysis which we leave to future work.
All subjects are professional translators with at least three years of professional experience, at least one year of post-editing experience, and strong proficiency with CAT tools. Translators were provided with links to the source articles to facilitate contextualization, were asked to produce translations of publishable quality, and were instructed not to use any external MT engine to produce their translations. Assessing the final quality of the post-edited material is out of the scope of the current study, although we realize that this is an important consideration when assessing usability in a professional context. A summary of our translation guidelines is provided in Appendix C.

Choice of Source Texts
The selected documents represent a subset of the FLORES-101 benchmark (Goyal et al., 2022), consisting of sentences taken from English Wikipedia and covering a mix of topics and domains. While professional translators generally specialize in one or a few domains, we opt for a mixed-domain dataset to minimize domain adaptation efforts by the subjects and maximize the generalizability of our results. Importantly, FLORES-101 includes high-quality human translations into 101 languages, which makes it possible to automatically estimate NMT quality and discard excessively low-scoring models or language pairs before our experiment. FLORES-101 also provides useful metadata, e.g. the source URL, which allows us to ensure the absence of public translations of the selected contents, which could be leveraged by translators and compromise the validity of our setup. The documents used for our study are fragments of contiguous sentences extracted from the Wikipedia articles that compose the original FLORES-101 corpus. Although small, the context provided by document structure allows us to simulate a more realistic translation workflow compared to out-of-context sentences.
Based on our available budget, we select 112 English documents from the devtest portion of FLORES-101, corresponding to 450 sentences and 9,626 words. More details on the data selection process are provided in Appendix D.

Choice of Languages
Training data is among the most important factors in determining the quality of an NMT system. Unfortunately, using strictly comparable or multi-parallel datasets, like Europarl (Koehn, 2005) or the Bible corpus (Mayer and Cysouw, 2014), would dramatically restrict the diversity of languages available to our study, or imply a prohibitively low translation quality on general-domain text. In order to minimize the effect of training data disparity while maximizing language diversity, we choose representatives of six different language families for which comparable amounts of training data are available for our open-source model, namely Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese.
As shown in Table 1, our language sample ensures a good diversity in terms of language family and relatedness to English, type of morphological system, morphological complexity (measured by mean size of paradigm; MSP, Xanthos et al. 2011), and script. We also report type-token ratio (TTR), the only language property that was found to correlate significantly with translation difficulty in a sample of European languages (Bugliarello et al., 2020).
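As a point of reference, TTR is simply the proportion of distinct word forms among all running tokens in a sample. The following minimal sketch assumes naive whitespace tokenization; the sample text is illustrative and not the data behind Table 1:

```python
def type_token_ratio(tokens):
    """Type-token ratio: number of distinct word forms over total tokens.
    Higher values loosely indicate richer morphology or vocabulary."""
    return len(set(tokens)) / len(tokens)

# Illustrative sample; corpus-level TTR is tokenizer- and sample-size-sensitive.
tokens = "the cat sat on the mat and the dog sat too".split()
print(round(type_token_ratio(tokens), 3))  # 0.727 (8 types / 11 tokens)
```

Because TTR depends strongly on sample size, cross-lingual comparisons require comparable samples, which a multi-parallel corpus like FLORES-101 provides.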
While the amount of language-specific parallel sentence pairs used for the multilingual fine-tuning of mBART-50 varies widely (4K<N<45M), all our selected language pairs fall within the 100K-250K range (mid-resourced, see Table 2), enabling a fair cross-lingual performance comparison.

Choice of MT Systems
While most of the best-performing general-domain NMT systems are commercial, [...] Tang et al. (2021) extend mBART by further pre-training on 25 new languages and performing multilingual translation fine-tuning for the full set of 50 languages, producing three configurations of multilingual NMT models: many-to-one, one-to-many, and many-to-many. Our choice of mBART-50 is largely motivated by its manageable size, its good performance across the set of evaluated languages (see Table 2), and its adoption in other NMT (Liu et al., 2021) and post-editing (Fomicheva et al., 2020) studies. Although mBART-50's performance is usually comparable to or slightly worse than that of the tested bilingual NMT models, using a multilingual model allows us to evaluate the downstream effectiveness of a single, unified system trained on pairs evenly distributed across the tested languages. Finally, adopting two systems with marked differences in automatic evaluation scores allows us to estimate how a significant increase in metrics such as BLEU, CHRF and COMET (Papineni et al., 2002; Popović, 2015; Rei et al., 2020) impacts downstream productivity across languages in a realistic post-editing scenario.

Table 3: A DivEMT corpus entry, including the English source (SRC: "Inland waterways can be a good theme to base a holiday around."), its translation from scratch (HT), the MT output of mBART-50 (MT) and its post-edited version (PE) for all languages. We highlight insertions, deletions, substitutions and shifts computed with Tercom (Snover et al., 2006). Full examples available in Appendix F.

Translation Platform and Collected Data
Translators were asked to use PET (Aziz et al., 2012), a computer-assisted translation tool that supports both translating from scratch and post-editing. This tool was chosen because (i) it logs information about the post-editing process, which we use to assess effort (see Section 4); and (ii) it is a mature research-oriented tool that has been successfully used in several previous studies (Koponen et al., 2012; Toral et al., 2018). The minimalistic nature of PET's interface and functionalities limits its application in commercial translation activities, making it generally unfamiliar to professional translators.
We consider this aspect an advantage in light of our controlled setup, since it allows us to avoid additional confounding effects or disparities stemming from tool-specific capabilities and different degrees of proficiency with the software. We also observe that, due to the varied and generic nature of the selected documents, functionalities such as concordance and translation memory matches would have proven much less useful in our setup. We collect three types of data:

• Resulting translations produced by translators in either HT or PE modes, constituting a multilingual corpus with one source text and 18 translations (one per language-modality combination), exemplified in Table 3.

• Behavioral data for translated sentences, including editing time, amount and type of keystrokes (content, navigation, erase, etc.), and number and duration of pauses above 300/1000 milliseconds (Lacruz et al., 2014).

• Pre- and post-task questionnaires. The former focuses on demographics, education, and work experience with translation and post-editing. The latter elicits subjective assessments of post-editing quality, effort and enjoyability compared to translating from scratch.

We consider two main objective indicators of editing effort, namely temporal measurements (and related productivity gains) and post-editing rates, measured by the Human-targeted Translation Edit Rate (HTER, Snover et al. 2006). Finally, we assess the subjective perception of PE gains by examining the post-task questionnaires. We reiterate that all scores in this section are computed on the same set of source sentences for all languages, resulting in a faithful cross-lingual comparison of post-editing effort thanks to DivEMT's controlled setup.
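To make the pause metrics concrete, the sketch below counts inter-keystroke pauses above a threshold from a list of keystroke timestamps. This is an illustrative reconstruction, not PET's actual logging code, and the timestamps are hypothetical:

```python
def count_pauses(timestamps_ms, threshold_ms=1000):
    """Return (number, total duration) of inter-keystroke gaps at or above
    threshold_ms, mirroring the 300/1000 ms pause thresholds of
    Lacruz et al. (2014)."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    long_gaps = [g for g in gaps if g >= threshold_ms]
    return len(long_gaps), sum(long_gaps)

# Hypothetical keystroke log (milliseconds since the segment was opened):
ts = [0, 120, 250, 1700, 1810, 4200]
print(count_pauses(ts, threshold_ms=1000))  # (2, 3840)
```

Long pauses are commonly read as a proxy for cognitive effort, which is why both a short (300 ms) and a long (1000 ms) threshold are recorded.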

Temporal Effort and Productivity Gains
We start by comparing task time (seconds per processed source word) across languages and modalities. For this purpose, edit times are computed for every document in every language, pooling over the multiple translators of each language. As shown in Figure 2, translation time varies considerably across languages even when no MT system is involved (HT), suggesting an intrinsic variability in translation complexity for different subjects and language pairs. Indeed, for the HT modality, the time required for the 'slowest' target languages (Italian, Ukrainian) is roughly double that of the 'fastest' one (Turkish). This pattern cannot be easily explained and contrasts with factors commonly tied to MT complexity, such as source-target morphological richness and language relatedness (Birch et al., 2008; Belinkov et al., 2017).

For a measure of productivity gains that is easier to interpret and more in line with translation industry practices, we turn to productivity expressed in source words processed per minute and compute the speed-up induced by the two post-editing modalities over translating from scratch (∆HT). Table 4 presents our results. Across systems, we find that large differences among automatic MT quality metrics indeed reflect on post-editing effort, suggesting a nuanced picture that is complementary to the findings of Zouhar et al. (2021). While they observed post-editing time gains to quickly saturate for slight changes in high-quality MT, we find that moving from medium-quality to high-quality MT yields meaningful productivity improvements across most evaluated languages. Across languages, too, the magnitude of productivity gains ranges widely, from doubling in some languages (Dutch PE1, Italian PE1 and PE2) to only about 10% (Arabic, Turkish and Ukrainian PE2). When only considering the better-performing system (PE1), post-editing remains clearly beneficial in all languages despite the high variability in ∆HT scores. Results are more nuanced for the open-source system (PE2), with three out of six languages displaying only marginal gains (<15% in Arabic, Turkish and Ukrainian). Despite its overall lower performance, mBART-50 (PE2) is the only system enabling a fair comparison across languages (from the point of view of training data size and architecture, see Section 3.4). Interestingly, if we focus on the gains induced by this system, factors like language relatedness and morphological complexity become relevant. Specifically, Italian (+95%), Dutch (+61%) and Ukrainian (+14%) are genetically and syntactically related to English, but Ukrainian has a richer morphology (see Table 1). On the other hand, Vietnamese (+23%), Turkish (+12%) and Arabic (+10%) all belong to different families. However, Vietnamese is isolating (little to no morphology), while Turkish and Arabic have very rich morphological systems (agglutinative and introflexive, respectively; the latter is especially problematic for subword segmentation, Amrhein and Sennrich 2021). Other differences are, however, harder to explain. For instance, Dutch is closely related to English and has a simpler morphology than Italian, but its productivity gain with mBART-50 is lower (61% vs. 95%). This finding is accompanied by an important gap in the BLEU and COMET scores achieved by mBART-50 on the two languages (22.6 vs. 24.4 BLEU and 0.532 vs. 0.648 COMET for Dutch vs. Italian, respectively), which cannot be explained by training data size.
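For clarity, the ∆HT figures above are relative speed-ups in words-per-minute throughput. A minimal sketch with hypothetical throughput numbers (chosen for illustration, not taken from Table 4):

```python
def delta_ht(ht_wpm, pe_wpm):
    """Relative productivity gain of a post-editing modality over translation
    from scratch, both expressed in source words per minute."""
    return (pe_wpm - ht_wpm) / ht_wpm

# Hypothetical figures: 10 words/min from scratch vs. 19.5 when post-editing.
print(f"{delta_ht(10.0, 19.5):+.0%}")  # +95%
```

A ∆HT of +95% thus means that, per unit of time, a translator processes nearly twice as many source words when post-editing than when translating from scratch.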
In summary, our findings confirm the overall positive impact of NMT post-editing on translation productivity observed in previous PE studies. However, we note how the magnitude of this impact is highly variable across systems and languages, with inter-subject variability also playing an important role, in line with previous studies (Koponen et al., 2020) (see Section 6 for more details). The small size of our language sample does not allow us to draw direct causal links between specific typological properties and post-editing efficiency. That said, we believe these results have important implications for the claimed 'universality' of current state-of-the-art MT and NLP systems, mostly based on the Transformer architecture (Vaswani et al., 2017) and BPE-style subword segmentation techniques (Sennrich et al., 2016).

Modeling Temporal Effort
Given the high variability among translators, segments and translation modalities, we assess the validity of our observations via statistical analysis of temporal effort using a linear mixed-effects regression model (LMER, Lindstrom and Bates 1988), following Green et al. (2013) and Toral et al. (2018). We fit our model on n = 7434 instances, corresponding to 413 sentences translated by 18 translators, using translation time as the dependent variable. Our fixed predictors include translation modality, target language, their interaction, and the length of the source segment in characters. Our random-effects structure includes random intercepts for different segments (nested within documents) and translators, as well as a random slope for modality over individual segments. Table 5 presents the set of predictors included in the final model, an estimate of their impact on edit times, and their significance. We find both PE modalities to significantly reduce translation times (p < 0.001), with PE1 being significantly faster than PE2 (p < 0.001) across all languages. Taking the language for which HT is slowest (Ukrainian) as the reference level, the reduction in time brought by Google Translate is significantly more pronounced for Italian, Dutch (p < 0.001), and Turkish (p < 0.05). For mBART-50, however, we only observe significantly more pronounced increases in productivity for Italian and Dutch (p < 0.001) compared to the reference. We find these results to corroborate the observations of the previous section.

Post-Editing Rate
We proceed to study post-editing patterns using the widely-adopted Human-targeted Translation Edit Rate (HTER, Snover et al. 2006), computed as the length-normalized sum of word-level substitutions, insertions, deletions, and shift operations performed during post-editing (see Appendix E for additional results with a character-level variant of HTER). As shown in Figure 3, PE1 required less editing than PE2 for all languages, and a high variability is observed across the two systems and all languages. Since translators were not informed about the presence of two MT systems, we exclude the possibility that these results reflect an over-reliance on, or distrust towards, a specific MT system. For Google Translate, Ukrainian shows the heaviest edit rate, followed by Vietnamese, whereas Arabic, Dutch, Italian and Turkish all show relatively low amounts of edits. Focusing again on mBART-50 for a fairer cross-lingual comparison, Ukrainian is by far the most heavily edited language, followed by a medium-tier group composed of Vietnamese, Arabic and Turkish, and finally by Dutch and Italian as low-edit languages. Results show that several of our observations on linguistic relatedness and type of morphology also apply to edit rates, with languages less related to English or having richer morphology requiring more post-edits on average.

Figure 4 visualizes the large gap in edit rates across languages and subjects by presenting the amount of "errorless" MT sentences that were accepted directly, i.e. without any post-editing. We note again how the NMT system heavily influences the rate of occurrence of such sentences; nonetheless, Dutch and Italian generally present more errorless sentences than Ukrainian and Vietnamese. In particular, for Google Translate outputs, the average rate of errorless sentences is roughly 25% for the former target languages, while for the latter it accounts for only 3% of total translations. Surprisingly, the English-Turkish pair also fares well, despite the low source-target relatedness.
Finally, we note that post-editing effort appears to correlate poorly with the automatic MT quality metrics reported in Table 2 (e.g. see the high scores of Vietnamese and the low scores of Dutch PE1), highlighting a difficulty in predicting the benefits of MT post-editing over HT for new language pairs.
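For intuition, HTER is an edit distance between the raw MT output and its post-edited version, normalized by the post-edited (reference) length. The sketch below approximates it with a plain word-level Levenshtein distance; the real metric is computed with Tercom (Snover et al., 2006), which additionally handles block shifts and tokenization. The sentence pair is hypothetical, loosely modeled on the Table 3 example:

```python
def approx_hter(mt_output, post_edited):
    """Word-level substitutions/insertions/deletions between an MT output and
    its post-edit, normalized by post-edit length. Approximates HTER
    (the real metric, via Tercom, also counts block shifts)."""
    hyp, ref = mt_output.split(), post_edited.split()
    # Standard Levenshtein dynamic program over word sequences.
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

mt = "inland waterways can be good theme for a holiday"
pe = "inland waterways can be a good theme for a holiday"
print(round(approx_hter(mt, pe), 2))  # 0.1 -- one insertion over 10 words
```

An HTER of 0 thus corresponds to the "errorless" sentences of Figure 4, accepted without any edit.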

Perception of Productivity Gain
We conclude our analysis by examining the post-task questionnaires, in which participants expressed their perception of MT quality and translation speed across HT and PE modalities (HTs, PEs) using a 1-7 Likert scale (1 slowest, 7 fastest). We use these to compute the Perceived Productivity Gain (PPG) as PPG = PEs − HTs and visualize it in Figure 5. We observe that Italian and Dutch, the only target languages with marked productivity gains (∆HT) regardless of the PE system in Table 4, are also the only ones having consistently high (≥ 2) PPG scores across all subjects. Moreover, we remark how PPG scores for target languages with a large gap in ∆HT between a high PE1 and a low PE2 (Arabic, Ukrainian) are hardly distinguishable from those of languages in which ∆HT is low for both PE systems (Turkish, Vietnamese). Notably, 4 out of 18 subjects attribute negative PPGs to the PE modality, even though productivity gains were recorded across all subjects and languages. These results suggest that worst-case usage scenarios may play an important role in driving PPG, i.e. that subjects' perception of quality is largely shaped by particularly challenging or unsatisfying interactions with the NMT system, rather than by the average case. Finally, from the post-task questionnaire, PPG scores exhibit a strong positive correlation with the perception of MT adequacy (ρ = 0.66), fluency (ρ = 0.46) and overall quality (ρ = 0.69), and more generally with a higher enjoyability of PE (ρ = 0.60), while being inversely correlated with the perception of problematic mistranslations (ρ = −0.60).
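The PPG computation itself is a simple difference of Likert ratings; a sketch with hypothetical questionnaire answers:

```python
def perceived_productivity_gain(pe_speed, ht_speed):
    """PPG = PE_s - HT_s, where both are 1-7 Likert speed ratings
    (1 slowest, 7 fastest). Positive values mean PE felt faster."""
    for score in (pe_speed, ht_speed):
        assert 1 <= score <= 7, "Likert ratings must lie in [1, 7]"
    return pe_speed - ht_speed

# Hypothetical subject: rated PE speed 6 and HT speed 3.
print(perceived_productivity_gain(6, 3))  # 3
```

Negative values, as reported by 4 of the 18 subjects, indicate that post-editing was perceived as slower than translating from scratch.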

Conclusions
In this work we introduced DivEMT, the outcome of a post-editing study spanning two state-of-the-art NMT systems, 18 professional translators and six typologically diverse target languages under a unified setup.
We leveraged DivEMT's behavioral data to perform a controlled cross-language analysis of NMT post-editing effort along its temporal and editing dimensions. The analysis reveals that NMT drives significant improvements in productivity across all the evaluated languages, but the magnitude of these improvements depends heavily on the language and the underlying NMT system. In this setting, productivity measurements across modalities were found to be generally consistent with the recorded editing patterns. Our results indicate that translators working on language pairs with significant post-editing productivity gains, on average, perform fewer edits and accept more machine-generated translations without any editing. We also observed a disconnect between post-editing productivity gains and MT quality metrics collected for the same NMT systems. Finally, low source-language relatedness and high target morphological complexity seem to hinder productivity when NMT is adopted, even in settings where system architecture and training data are controlled for.
In our qualitative analysis, translators' perception of post-editing usefulness was found to be strongly shaped by problematic mistranslations.Languages showing large productivity gains for both NMT systems were the only ones associated with a positive perception of PE-mediated gains, as opposed to mixed or negative opinions for other translation directions.
In future work, a more fine-grained analysis of the types of edits conducted by the translators, and their differences across languages, could shed more light on our current findings.

Limitations
The subjective component introduced by the presence of multiple translators is an important confounding factor in our setup, especially due to the relatively small number of subjects for each language. In our study, we tried to balance a thorough control of other noise components with a faithful reproduction of a realistic translation scenario. However, we realize that the combination of the limited document context provided by FLORES-101, the variety of topics covered in the texts, and the experimental nature of the PET platform constitutes an atypical setting that may have impacted the translators' natural productivity. Moreover, variability in the content of mBART-50 fine-tuning data, despite the comparable sizes, may have played a role in the variability observed for automatic MT evaluation and PE gains across languages.

Broader Impact and Ethical Considerations
This line of research aims at providing a more precise and faceted understanding of translation and editing effort across multiple languages, and as such is worth pursuing to ensure a fairer compensation for translators compared to one-size-fits-all approaches based on automatic quality metrics. Furthermore, understanding how MT applies to translators' work in less-researched languages, together with the diversity of measures we collect, can give a clearer picture of MT usability, in its broader sense, than automatic metrics alone. It is therefore relevant to test NMT models in controlled translation environments. In our experiment, Language Service Providers were paid their requested rate. All words were paid as new words, since MT usability was unknown prior to the experiment. Translators were also given thorough instructions and ample time to complete the assignment, making allowances for the COVID-19 pandemic, which affected some of the participants. Translators were informed that they could opt out at any time and have their information deleted.

A Modality scheduling
Table 6 shows an example of the adopted modality scheduling. The modality of document i for translator Tj in the main task is picked randomly among the two modalities that the same translator did not see for document i−1, enforcing that consecutive documents given to the same translator are assigned different modalities; this avoids periodicity in repetition and enables the same-language comparisons of Section 4. Importantly, although all three modes were collected for every document, we did not enforce modality consistency for the same translator identifier across languages (i.e. T1 for Italian does not have the same sequence of modalities as T1 for Arabic, for example). For this reason, individual subjects are not directly comparable across languages. This is relevant since, e.g., T3 for Dutch and T3 for Italian did not operate on the same set of sentences in the same modalities, and thus their comparable editing behavior in Figure 4 should be attributed to personal preference rather than an identical assignment of modalities on the same sentences. Despite modality scheduling, we have no guarantee that translators consistently follow the order of documents presented in PET, and could thus operate on documents assigned to the same modality consecutively. However, this possibility reduces to random guessing, since no identifying information related to the modality is available until the document is opened for editing. The sequence of modalities for the warm-up task is fixed: HT, PE2, PE1, HT, PE2.

B Subject Information
During the setup of our experiment, one translator refused to carry out the main task after the warmup phase, and another was replaced at our discretion. Both translators were working in the English-Italian direction and were found to make heavy use of copy-pasting during the warmup stage, suggesting an incorrect use of the platform in light of our guidelines. The two translators, whom we had identified as T2 and T3 for Italian, were replaced by T5 and T4, respectively. Table 7 reflects the final translator selection for all languages, with the information collected by means of the pre-task questionnaire.

C Translation Guidelines
An extract of the translation guidelines provided to the translators follows. The full guidelines are provided in the additional materials.
Fill in the pre-task questionnaire before starting the project. In this experiment, your goal is to complete the translation of multiple files in one of two possible translation settings. Please complete the tasks on your own, even if you know another translator who might be working on this project. The translation setting alternates between texts, with each text requiring a single translation in the assigned setting. The two translation settings are: 1. Translation from scratch. Only the source sentence is provided; you are to write the translation from scratch.
2. Post-editing. The source sentence is provided alongside a translation produced by an MT system. You are to post-edit this MT output. Post-edit the text until you are satisfied with the final translation (the required quality is publishable quality). If the MT output is too time-consuming to fix, you can delete it and start from scratch. However, please do not systematically delete the provided MT output to give your own translation.
Important: All editing MUST happen in the provided PET interface: working in other editors and copy-pasting the text back into PET is NOT ALLOWED, because it invalidates the experiment. This is easy to spot in the log data, so please avoid doing this. Complete the translation of all files sequentially, i.e. in the order presented in the tool. DO NOT SKIP files at your own convenience. Make sure that ALL files are translated when you deliver the tasks.
The aim is to produce publishable professional-quality translations in both translation settings. Thus, please translate to the best of your abilities. You can return to the files and self-review as many times as you think necessary. Important: The time invested in translating is recorded while the active unit (sentence) is in editing mode (yellow background). Therefore:
• Only start translating when you are in editing mode (yellow background). In other words, do not start thinking about how you will translate a sentence while the active unit is not yet in editing mode (green or red background).
• First you will translate a warmup task, and then the main task. While translating each file, you can consult the source text (ST) by looking up the URL in the Excel files that we have sent for reference.
To find the correct terminology for the translation, you can consult any source on the Internet. Important: However, it is NOT ALLOWED to use any MT engine to find terms or alternative translations (such as Google Translate, DeepL, MS Translator or any MT engine available in your language). Using MT engines invalidates the experiment and will be detected in the log data. Please fill in the post-task questionnaire ONLY ONCE, after completing all the translation tasks (both warmup and main tasks).

D Details on Document Selection and Preprocessing
Document selection Table 8 presents the distribution of the documents selected from the FLORES-101 devtest split, based on their domain and the number of sentences that compose them. The first goal of the selection process was to preserve a rough balance between the three categories while mostly including 4- and 5-sentence documents, which are faster to edit in PET (no need to frequently close and reopen an editing window). Another objective was to minimize the chance of translators finding the translated version of the Wikipedia article from which documents were taken and copying from there, despite our guidelines. We thus scraped the articles from Wikipedia and assessed the number of available translations. Among the selected documents, only a small subset has translations in other languages (see Figure 6, top; an article can have multiple languages), mainly in Hebrew (14), Chinese (10), Spanish (7) and German (5).
Considering the total number of translations for every article (Figure 6, bottom), we see that roughly 75% of them (79 documents) have no translations. We consider this satisfactory evidence that copying is unlikely to be widespread, and we follow up on this evaluation by also ensuring that no repeated copy-paste patterns are present in the keylogs after the warmup stage.
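A keylog check of the kind described above might flag log events that insert an implausibly long span of text in a single edit, a simple proxy for pasting. The event representation below is hypothetical and does not reflect PET's actual log schema:

```python
def flag_paste_events(events, max_chars=25):
    """Return indices of logged edit events that insert more characters
    at once than a plausible keystroke burst, as a rough copy-paste
    heuristic. `events` is a list of (event_type, inserted_text) tuples;
    this representation is illustrative, not PET's log format."""
    return [
        i for i, (kind, text) in enumerate(events)
        if kind == "insert" and len(text) > max_chars
    ]
```

The threshold and event shape are assumptions for illustration; any real check would be tuned to the logging tool's granularity.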
Filtering of Outliers For our analysis of Section 4, we only use sentences with an editing time lower than 45 minutes, a threshold selected heuristically as high enough to allow for extensive searching and thinking. In the following, we present the identifiers of the sentences that were filtered out during this process; e.g., 54.1 denotes the first sentence of document 54, with item_id equal to flores101-main-541 in the dataset. Note that the outlier sentences occurred in only 2/6 languages and were all different, indicating no systematic issues in the sample: ARA: 54.1, 100.3; VIE: 3.1, 3.2, 24.3, 28.4, 33.1, 33.2, 40.3, 41.2, 50.3, 100.1, 102.1, 106.1, 107.2, 107.4. The 17 sentences were removed for all modalities and languages in the analysis of Section 4 to preserve the validity of our comparison, representing a loss of roughly 4% of the total available data, a tolerable amount for our analysis.

Table 9: Description of the main fields associated with every DivEMT data entry. An entry corresponds to a translation in a specific modality (HT, PE1 or PE2) for one of the six target languages.
- Granular editing metrics and the overall HTER, computed using the Tercom library.
- cer: Character-level HTER score computed between the MT and post-edited outputs.
- bleu, chrf: Sentence-level BLEU and ChrF scores between the MT and post-edited fields, computed using the SacreBLEU library with default parameters.
- time_per_char: Edit time per source character, expressed in seconds.
- key_per_char: Proportion of keystrokes per character needed to perform the translation.
- words_per_hour, words_per_minute: Number of source words translated or post-edited per hour/minute.
- subject_visit_order: Id denoting the order in which the translator accessed documents in the interface.
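For illustration, the derived per-entry fields above can be computed from the raw measurements roughly as follows. Field names follow Table 9, but this is our reconstruction, not the exact released computation:

```python
def derived_metrics(edit_time_s: float, n_keys: int, src_text: str) -> dict:
    """Sketch of the derived fields in Table 9, computed from the total
    edit time (seconds), the keystroke count, and the source text.
    Whitespace-based word splitting is a simplifying assumption."""
    n_chars = len(src_text)
    n_words = len(src_text.split())
    return {
        "time_per_char": edit_time_s / n_chars,  # seconds per source character
        "key_per_char": n_keys / n_chars,        # keystrokes per source character
        "words_per_minute": n_words / (edit_time_s / 60),
        "words_per_hour": n_words / (edit_time_s / 3600),
    }
```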
Fields Description Table 9 presents the set of fields collected for every entry of the DivEMT dataset. The fields related to keystrokes, times, pauses, annotations and visit order were extracted from the event logs of PET (.per files), while edit information and other MT quality metrics were computed at a later stage with the help of widely used libraries.
Additional Notes on PET The PET platform was modified to enable correct right-to-left text visualization, which was necessary for Arabic.

E Other Measurements
CharacTER Across Systems and Languages While HTER is a standard metric adopted in both academic and industrial settings, we also evaluated its character-level variant CharacTER (Wang et al., 2016) to assess whether it could better account for the editing process of morphologically rich languages. Figure 7 presents the CharacTER results. Comparing this plot to the HTER one (Figure 3), we notice that CharacTER preserves the overall trends but slightly improves the edit rate for Arabic and Turkish with respect to the other languages. Nevertheless, we find that HTER correlates slightly better with productivity scores across all tested languages, both at the sentence and at the document level. For this reason, word-level results are reported in the article's main body.

Automatic Evaluation of NMT Systems The selection of systems used in this study was driven by a broader evaluation procedure covering more models, metrics and target languages. Table 10 presents the overall results of our evaluation. We use HuggingFace's Transformers library (Wolf et al., 2020) for all neural models, with default decoding settings and without further fine-tuning. All metrics were computed using the default settings of SacreBLEU (Post, 2018) and COMET (Rei et al., 2020).
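To illustrate the character-level granularity discussed for CharacTER above, a bare-bones character edit rate can be computed as the character edit distance between MT output and post-edit, normalized by the post-edit length. Note that the actual CharacTER metric additionally models shift operations and uses its own normalization, which this sketch omits:

```python
def char_edit_rate(mt: str, post_edit: str) -> float:
    """Character-level edit distance between an MT output and its
    post-edit, normalized by the post-edit length. A simplified
    stand-in for CharacTER/HTER-style metrics, illustration only."""
    m, n = len(mt), len(post_edit)
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if mt[i - 1] == post_edit[j - 1] else 1
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost)
        prev = curr
    return prev[n] / max(n, 1)
```

A score of 0 means the post-editor left the MT output untouched; higher scores mean heavier character-level editing.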

Inter-subject Variability in Translation Times
Although the variability across subjects working in the same language direction is not the main concern of our investigation, we provide Figure 8 (an expanded version of Figure 2) to visualize inter-subject variability in translation times. We observe that variability across translators is more pronounced when translating from scratch, and that the overall trend of speed improvements associated with PE is mostly preserved (with few exceptions related to the PE2 modality).

F Full DivEMT Examples
Tables 11 and 12 present two full examples of DivEMT entries, including all output modalities, intermediate MT outputs, post-edits and edit highlights for all target languages.

We log-transform the dependent variable, edit time in seconds, given its long right tail. The models are built by adding one element at a time and checking whether each addition leads to a significantly better model according to AIC (i.e., whether the score is reduced by at least 2). We fit the models using ML when comparing models that differ in their fixed structure, and REML when they differ in their random structure.
We start from an initial model that includes only the two random intercepts (by-translator and by-segment) and proceed by (i) testing the significance of a nested document/segment random effect; (ii) adding fixed predictors one by one; (iii) adding interactions between fixed predictors; and (iv) adding random slopes.
This sequential procedure yields the resulting model. When checking the homoscedasticity and normality-of-residuals assumptions (Figures 9 and 10), we find that the latter is not fulfilled. Consequently, we remove data points whose observations deviate by more than 2.5 standard deviations from the value predicted by the model (2.4% of the data) and refit the best model on this subset.
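The residual-based trimming step can be sketched as follows. This is a generic reimplementation, not the authors' exact procedure; `observed` and `predicted` stand for the dependent variable and the mixed model's fitted values:

```python
from statistics import mean, pstdev

def trim_outliers(observed, predicted, n_sd=2.5):
    """Drop data points whose residual (observed - predicted) deviates
    from the mean residual by more than `n_sd` standard deviations,
    mirroring the 2.5-SD refitting step described above. Sketch only;
    the paper's analysis was run within a mixed-effects framework."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    mu, sd = mean(residuals), pstdev(residuals)
    keep = [abs(r - mu) <= n_sd * sd for r in residuals]
    kept_obs = [o for o, k in zip(observed, keep) if k]
    kept_pred = [p for p, k in zip(predicted, keep) if k]
    return kept_obs, kept_pred
```

After trimming, the model is refit on the surviving points and the normality of residuals is re-checked (Figure 10).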

Figure 1 :
Figure 1: The DivEMT data collection process. For every English source document, 18 professional translators are tasked to translate it from scratch (HT) or post-edit NMT systems' outputs (PE1/PE2) into six typologically diverse target languages. Behavioral data and qualitative assessments are collected during and after the process, respectively.

Figure 2 :
Figure 2: Temporal effort across languages and translation modalities, measured in seconds per processed source word. Each point represents a document, with higher scores denoting slower editing. ↑: number of data points per language not shown in the plot.

Figure 5 :
Figure 5: Perceived productivity gains (PPG) between the HT and PE translation modalities, assessed for all subjects after task completion.
- Identifiers for the item, the respective FLORES-101 sentence, the translator and the translation mode.
- src_text: The original source sentence, extracted from Wikinews, Wikibooks or Wikivoyage.
- mt_text: The MT output sentence before post-editing, present only if task_type is 'pe'.
- tgt_text: The final sentence produced by the translator (either from scratch or by post-editing mt_text).
- aligned_edit: Aligned visual representation of the machine translation and its post-edit, with edit operations.
- edit_time: Total editing time for the translation, in seconds.

Figure 6 :
Figure 6: Top: distribution of the availability of documents selected for DivEMT in languages other than English. Bottom: number of selected documents per number of available Wikipedia translations.

Figure 7 :
Figure 7: Character-level Human-targeted Translation Edit Rate (CharacTER) for Google Translate and mBART-50 post-editing across available languages.

Figure 9 :
Figure 9: Residuals of the final LMER model, used to verify the homoscedasticity assumption.

Figure 10 :
Figure 10: Quantile-quantile plot before and after the removal of outliers when fitting the LMER model, used to verify the normality assumption.

Table 2 :
MT quality of the selected NMT systems for English-to-target translation on the full FLORES-101 devtest split, in BLEU / chrF / COMET format. Best scores are highlighted in bold. We report the number of sentence pairs used for mBART-50 multilingual fine-tuning by Tang et al. (2021).
Commercial systems are not replicable, as their backends get silently updated over time. Moreover, without knowing the exact training specifics, we cannot attribute differences in the cross-lingual results to intrinsic language properties. We balance these observations by including two NMT systems in our study: Google Translate (GTrans) as a representative of commercial quality, and mBART-50 One-to-Many (Tang et al., 2021) as a representative of state-of-the-art open-source multilingual NMT technology. The original multilingual BART model (Liu et al., 2020) is an encoder-decoder transformer model pre-trained on monolingual documents in 25 languages. Tang et al. (

Table 4 :
Median productivity (PROD, # processed source words per minute) and median % post-editing speedup (∆HT) for all analyzed languages and modalities.Arrows denote the direction of improvement.

Table 5 :
LMER modeling results using translation time as the dependent variable. The reference levels for the predictors lang and task are Ukrainian and translation from scratch (HT), respectively. The estimated impact on edit time of every predictor is given in log-seconds.

Table 6 :
Modality scheduling overview. For each language, each subject (Ti) works with a pseudo-random sequence of modalities (HT, PE1, PE2). For the warmup task (N = 5), all translators are provided with the same documents in the same modalities. For the main task (N = 107), each translator is assigned a modality at random. Each document is translated once for every modality. The same procedure is repeated independently for all languages.

• Do not leave a unit in editing mode (yellow background) while you do something else. If you need to do something unrelated in the middle of a translation, go out of editing mode first.

Table 7 :
Subject information for DivEMT. The last three columns respectively represent the number of years of professional experience as a translator (YoE), the number of years of experience with MT post-editing (YoE w/ PE), and the percentage of work assignments requiring post-editing in the last 12 months (% PE) for each subject.
- Total number of keystrokes across all categories during the translation.
- Number of times the translator focused the target-sentence textbox during the session.

Table 13 :
Coefficients of the random intercept related to the subject_id variable, representing the identity of the translator performing the translation.