Claudia Wiesinger
2024
Bayesian Hierarchical Modelling for Analysing the Effect of Speech Synthesis on Post-Editing Machine Translation
Miguel Rios | Justus Brockmann | Claudia Wiesinger | Raluca Chereji | Alina Secară | Dragoș Ciobanu
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
Automatic speech synthesis has seen rapid development and integration in domains as diverse as accessibility services, translation, and language learning platforms. We analyse its integration into a post-editing machine translation (PEMT) environment and the effect this has on quality, productivity, and cognitive effort. We use Bayesian hierarchical modelling to analyse eye-tracking, time-tracking, and error annotation data resulting from an experiment in which 21 professional translators post-edited from English into German in a customised cloud-based CAT environment while listening to the source and/or target texts via speech synthesis. Using speech synthesis in a PEMT task has a non-substantial positive effect on quality, a substantial negative effect on productivity, and a substantial negative effect on the cognitive effort expended on the target text, i.e. participants needed to allocate less cognitive effort to the target text.
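To illustrate the kind of analysis the abstract describes, the sketch below shows a Bayesian hierarchical model with a population-level condition effect and random intercepts per translator, fitted with PyMC. This is not the authors' code: the outcome variable, the simulated data, and all names (e.g. log_time, beta_condition) are hypothetical stand-ins for the eye-tracking and time-tracking measures collected in the experiment.

```python
# Minimal sketch (assumed, not the authors' implementation): a Bayesian
# hierarchical model of a per-segment outcome with random intercepts for
# each of the 21 translators and a binary speech-synthesis condition.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_participants, n_segments = 21, 30
participant = np.repeat(np.arange(n_participants), n_segments)
condition = rng.integers(0, 2, size=participant.size)   # 0 = silent, 1 = speech synthesis
log_time = rng.normal(4.0 + 0.1 * condition, 0.5)        # hypothetical outcome: log editing time

with pm.Model() as hier_model:
    # Population-level effects
    intercept = pm.Normal("intercept", mu=0.0, sigma=5.0)
    beta_cond = pm.Normal("beta_condition", mu=0.0, sigma=1.0)

    # Participant-level (random) intercepts, non-centred parameterisation
    sigma_p = pm.HalfNormal("sigma_participant", sigma=1.0)
    z_p = pm.Normal("z_participant", mu=0.0, sigma=1.0, shape=n_participants)
    alpha_p = pm.Deterministic("alpha_participant", z_p * sigma_p)

    # Likelihood
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = intercept + alpha_p[participant] + beta_cond * condition
    pm.Normal("log_time", mu=mu, sigma=sigma, observed=log_time)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

In such a model, "substantial" effects would correspond to posterior distributions for the condition coefficient that lie largely away from zero; the grouping by participant accounts for repeated measures from the same translator.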
2022
Error Annotation in Post-Editing Machine Translation: Investigating the Impact of Text-to-Speech Technology
Justus Brockmann | Claudia Wiesinger | Dragoș Ciobanu
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
As post-editing of machine translation (PEMT) becomes one of the dominant services offered by the language services industry (LSI), efforts are being made to support the provision of this service with additional technology. We present text-to-speech (T2S) as a potential attention-raising technology for post-editors. Our study was conducted with university students and included both PEMT and error annotation of a creative text with and without T2S. Focusing on the error annotation data, our analysis finds that participants left fewer MT errors unannotated in the T2S condition than in the silent condition. At the same time, more over-annotation was recorded. Finally, annotation performance corresponds to participants' attitudes towards using T2S.