Sui He


2024

Prompting ChatGPT for Translation: A Comparative Analysis of Translation Brief and Persona Prompts
Sui He
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)

Prompt engineering has shown potential for improving translation quality in LLMs. However, the possibility of using translation concepts in prompt design remains largely underexplored. Against this backdrop, this paper examines the effectiveness of incorporating the conceptual tool of the “translation brief” and the personas of “translator” and “author” into prompt design for translation tasks in ChatGPT. Findings suggest that, although certain elements are constructive in facilitating human-to-human communication about translation tasks, their effectiveness in improving translation quality in ChatGPT is limited. This highlights the need for exploratory research on two fronts: how translation theorists and practitioners can adapt the current set of conceptual tools, rooted in the human-to-human communication paradigm, to this emerging workflow involving human-machine interaction, and how concepts developed in translation studies can inform the training of GPT models for translation tasks.
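
As a rough illustration of the kind of prompt design under discussion, the sketch below folds a translator persona and a translation brief into a ChatGPT request via the OpenAI Python SDK. The model name, persona wording, brief wording, and source sentence are illustrative assumptions, not the prompts tested in the paper.

```python
# Hypothetical sketch: combining a persona (system message) with a translation
# brief (user message) when requesting a translation from ChatGPT.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = "You are a professional translator specialising in popular science."
brief = (
    "Translation brief: the source is an English popular-science article; "
    "the target text is for a general Chinese-speaking readership; "
    "the purpose is to inform and engage non-expert readers."
)
source_text = (
    "Black holes are regions of spacetime where gravity is so strong "
    "that nothing can escape."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": f"{brief}\n\nTranslate the following text into Chinese:\n{source_text}"},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```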

Exploring NMT Explainability for Translators Using NMT Visualising Tools
Gabriela Gonzalez-Saez | Mariam Nakhle | James Turner | Fabien Lopez | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Raheel Qader | Caroline Rossi | Didier Schwab | Jun Yang
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)

This paper describes work in progress on visualisation tools designed to foster collaboration between translators and computational scientists. We describe how visualisation features can be used to explain translations and NMT outputs. We tested several visualisation functionalities with three NMT models covering the Chinese-English, Spanish-English and French-English language pairs. We created three demos containing different visualisation tools and analysed them within the framework of performance-explainability, focusing on the translator’s perspective.
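
As a rough illustration of the kind of visualisation functionality at stake, the sketch below extracts cross-attention weights from a publicly available NMT model and plots them as a heat map; the checkpoint, sentence pair, and plotting choices are assumptions and do not reproduce the project’s demos.

```python
# Hypothetical sketch: a cross-attention heat map for one sentence pair,
# one of many possible ways to visualise what an NMT model attends to.
import matplotlib.pyplot as plt
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-en"  # assumed public French-English model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = "Le traducteur examine la sortie du système."
tgt = "The translator examines the system output."
batch = tokenizer(src, text_target=tgt, return_tensors="pt")

# Forward pass with the reference translation to obtain cross-attention weights.
outputs = model(**batch, output_attentions=True)

# outputs.cross_attentions: one tensor per decoder layer, each of shape
# (batch, heads, target_length, source_length). Average the heads of the
# last layer for a simple aggregate view.
attention = outputs.cross_attentions[-1][0].mean(dim=0).detach().numpy()

src_tokens = tokenizer.convert_ids_to_tokens(batch["input_ids"][0])
tgt_tokens = tokenizer.convert_ids_to_tokens(batch["labels"][0])

plt.imshow(attention, aspect="auto")
plt.xticks(range(len(src_tokens)), src_tokens, rotation=90)
plt.yticks(range(len(tgt_tokens)), tgt_tokens)
plt.xlabel("source tokens")
plt.ylabel("target tokens")
plt.tight_layout()
plt.show()
```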

The MAKE-NMTViz Project: Meaningful, Accurate and Knowledge-limited Explanations of NMT Systems for Translators
Gabriela Gonzalez-Saez | Fabien Lopez | Mariam Nakhle | James Turner | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Caroline Rossi | Didier Schwab | Jun Yang
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)

This paper describes MAKE-NMTViz, a project designed to help translators visualize neural machine translation outputs using explainable artificial intelligence visualization tools initially developed for computer vision.

2023

The MAKE-NMTVIZ System Description for the WMT23 Literary Task
Fabien Lopez | Gabriela González | Damien Hansen | Mariam Nakhle | Behnoosh Namdarzadeh | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Sadaf Mohseni | Caroline Rossi | Didier Schwab | Jun Yang | Jean-Baptiste Yunès | Lichao Zhu
Proceedings of the Eighth Conference on Machine Translation

This paper describes the MAKE-NMTVIZ systems trained for the WMT 2023 Literary task. For our primary submission, we used the train, valid1, and test1 splits of the GuoFeng corpus (Wang et al., 2023) to fine-tune the mBART50 model on Chinese-English data, following training parameters very similar to those of Lee et al. (2022). We trained for 3 epochs, using GELU as the activation function, with a learning rate of 0.05, a dropout of 0.1 and a batch size of 16. We decoded using a beam search of size 5. For our contrastive1 submission, we implemented a fine-tuned concatenation transformer (Lupo et al., 2023). Training proceeded in two steps: (i) a sentence-level transformer was trained for 10 epochs on the general, test1, and valid1 data (more details in the contrastive2 system); (ii) we then fine-tuned at document level on 3-sentence concatenations for 4 epochs using the train, test2, and valid2 data. During fine-tuning, we used ReLU as the activation function, with an inverse square root learning rate schedule, a dropout of 0.1, and a batch size of 64, and we decoded using beam search. For our contrastive2 and last submission, we implemented a sentence-level transformer model (Vaswani et al., 2017), trained for 10 epochs on the general-purpose, test1, and valid1 data. The training parameters were an inverse square root learning rate schedule, a dropout of 0.1, and a batch size of 64. We decoded using a beam search of size 4. We then compared the three translation outputs from an interdisciplinary perspective, investigating some of the effects of sentence- vs document-based training. Computer scientists, translators and corpus linguists discussed the remaining linguistic issues for this discourse-level literary translation.
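
For readers who want a concrete starting point, the following sketch shows how the primary submission’s reported hyperparameters (mBART50, 3 epochs, GELU, learning rate 0.05, dropout 0.1, batch size 16, beam size 5) might be expressed with Hugging Face Transformers. The checkpoint name, toy data, and preprocessing are assumptions; this is not the project’s actual training script.

```python
# Hypothetical sketch of the reported fine-tuning setup for the primary
# submission; data and paths are placeholders.
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/mbart-large-50-many-to-many-mmt"  # assumed mBART50 checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(
    checkpoint, src_lang="zh_CN", tgt_lang="en_XX"
)
# GELU activation and dropout 0.1, as reported in the abstract.
model = MBartForConditionalGeneration.from_pretrained(
    checkpoint, activation_function="gelu", dropout=0.1
)

# Toy stand-in for the GuoFeng train/valid splits used in the real setup.
raw = Dataset.from_dict({"zh": ["你好，世界。"], "en": ["Hello, world."]})

def preprocess(example):
    return tokenizer(example["zh"], text_target=example["en"],
                     truncation=True, max_length=256)

train_dataset = raw.map(preprocess, remove_columns=["zh", "en"])

args = Seq2SeqTrainingArguments(
    output_dir="mbart50-zh-en-literary",  # hypothetical output directory
    num_train_epochs=3,                   # 3 epochs
    learning_rate=0.05,                   # learning rate 0.05, as reported
    per_device_train_batch_size=16,       # batch size 16
    predict_with_generate=True,
    generation_num_beams=5,               # decode with a beam search of size 5
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```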