Shaghayegh Momtaz


2024

Impact of Syntactic Complexity on the Processes and Performance of Large Language Models-leveraged Post-editing
Longhui Zou | Michael Carl | Shaghayegh Momtaz | Mehdi Mirzapour
Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 2: Presentations)

This research explores the interaction between human translators and Large Language Models (LLMs) during post-editing (PE). The study examines the impact of syntactic complexity on PE processes and performance, specifically when working with raw translation output generated by GPT-4. We selected four English source texts (STs) from previous American Translators Association (ATA) certification examinations. Each text consists of about 10 segments and roughly 250 words. GPT-4 was employed to translate the four STs from English into simplified Chinese. The empirical experiment simulated an authentic PE work environment, using the professional computer-assisted translation (CAT) tool Trados. The experiment involved 46 participants with different levels of translation expertise (30 student translators and 16 expert translators), producing a total of 2,162 post-edited segments. We implemented five syntactic complexity metrics in the context of PE for quantitative analysis.
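The abstract does not name the five syntactic complexity metrics used in the study. As a purely illustrative sketch, not a reproduction of the paper's method, the snippet below shows how one common syntactic complexity measure, mean dependency distance, could be computed for English source segments with spaCy; the metric choice, the spaCy model, and the example segments are all assumptions for illustration.

```python
# Illustrative sketch only: mean dependency distance as one example of a
# syntactic complexity metric. The paper's five metrics are not specified
# in the abstract, so this does NOT reproduce the authors' method.
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def mean_dependency_distance(segment: str) -> float:
    """Average linear distance between each token and its syntactic head."""
    doc = nlp(segment)
    distances = [abs(tok.i - tok.head.i) for tok in doc if tok.head is not tok]
    return sum(distances) / len(distances) if distances else 0.0

# Hypothetical ST segments, standing in for the ATA exam texts.
segments = [
    "The committee approved the proposal after a lengthy debate.",
    "Translators who post-edit machine output face distinct cognitive demands.",
]
for seg in segments:
    print(f"{mean_dependency_distance(seg):.2f}  {seg}")
```

Segment-level scores like these could then be aggregated per text and compared against process measures (e.g., editing time or keystrokes) in the PE data.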