Chris Bayliss


2024

Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
Carolina Scarton | Charlotte Prescott | Chris Bayliss | Chris Oakley | Joanna Wright | Stuart Wrigley | Xingyi Song | Edward Gow-Smith | Rachel Bawden | Víctor M Sánchez-Cartagena | Patrick Cadwell | Ekaterina Lapshinova-Koltunski | Vera Cabarrão | Konstantinos Chatzitheodorou | Mary Nurminen | Diptesh Kanojia | Helena Moniz

A Case Study on Contextual Machine Translation in a Professional Scenario of Subtitling
Sebastian Vincent | Charlotte Prescott | Chris Bayliss | Chris Oakley | Carolina Scarton
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)

Incorporating extra-textual context such as film metadata into the machine translation (MT) pipeline can enhance translation quality, as indicated by automatic evaluation in recent work. However, the positive impact of such systems in industry remains unproven. We report on an industrial case study carried out to investigate the benefit of MT in a professional scenario of translating TV subtitles, with a focus on how leveraging extra-textual context impacts post-editing. We found that post-editors marked significantly fewer context-related errors when correcting the outputs of MTCue, the context-aware model, than when correcting the outputs of non-contextual models. We also present the results of a survey of the post-editors employed in the study, which highlights contextual inadequacy as a significant gap consistently observed in MT. Our findings strengthen the motivation for further work on fully contextual MT.
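To make "incorporating extra-textual context into the MT pipeline" concrete, here is a minimal, hypothetical sketch of conditioning an off-the-shelf MT model on film metadata by flattening it into a textual prefix. The model name, metadata fields, and prefix format are illustrative assumptions only; MTCue itself is a dedicated context-aware architecture, and this snippet merely illustrates the general idea of exposing extra-textual context to an MT system.

```python
from transformers import MarianMTModel, MarianTokenizer

# Placeholder model and metadata schema; not the paper's actual setup.
MODEL_NAME = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate_with_context(source: str, metadata: dict) -> str:
    # Flatten film/speaker metadata into a source-side prefix so the
    # model can condition on it alongside the sentence to translate.
    prefix = " ".join(f"<{key}: {value}>" for key, value in metadata.items())
    inputs = tokenizer(f"{prefix} {source}", return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(translate_with_context(
    "You can't be serious.",
    {"genre": "drama", "speaker_gender": "female", "register": "informal"},
))
```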

Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)
Carolina Scarton | Charlotte Prescott | Chris Bayliss | Chris Oakley | Joanna Wright | Stuart Wrigley | Xingyi Song | Edward Gow-Smith | Mikel Forcada | Helena Moniz

Reference-less Analysis of Context Specificity in Translation with Personalised Language Models
Sebastian Vincent | Rowanne Sumner | Alice Dowek | Charlotte Prescott | Emily Preston | Chris Bayliss | Chris Oakley | Carolina Scarton
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Sensitising language models (LMs) to external context helps them capture more effectively the speaking patterns of individuals with specific characteristics or in particular environments. This work investigates to what extent detailed character and film annotations can be leveraged to personalise LMs in a scalable manner. We then explore the use of such models in evaluating context specificity in machine translation. We build LMs that leverage rich contextual information to reduce perplexity by up to 6.5% compared to a non-contextual model, and that generalise well to a scenario with no speaker-specific data by relying on combinations of demographic characteristics expressed via metadata. Our findings are consistent across two corpora, one of which (Cornell-rich) is also a contribution of this paper. We then use our personalised LMs to measure the co-occurrence of extra-textual context and translation hypotheses in a machine translation setting. Our results suggest that the degree of context specificity found in professional translations in our domain is better preserved by a contextual machine translation model than by a non-contextual one, which is also reflected in the contextual model's superior reference-based scores.
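As a rough illustration of reference-less scoring of context specificity with an LM, the sketch below measures how much prepending extra-textual context lowers the negative log-likelihood of a translation hypothesis. A generic pretrained GPT-2 stands in for the paper's personalised LMs, and the prefix format, example metadata, and function names are assumptions made here for illustration, not the authors' implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Generic LM as a stand-in for a personalised (context-conditioned) LM.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def conditional_nll(text: str, prefix: str = "") -> float:
    """Mean negative log-likelihood (nats/token) of `text`, optionally
    conditioned on a metadata prefix whose tokens are excluded from the loss."""
    prefix_ids = tokenizer(prefix).input_ids if prefix else []
    text_ids = tokenizer((" " if prefix else "") + text).input_ids
    input_ids = torch.tensor([prefix_ids + text_ids])
    labels = input_ids.clone()
    labels[0, : len(prefix_ids)] = -100  # ignore prefix positions in the loss
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

# A hypothesis is "context-specific" to the extent that conditioning on the
# extra-textual context lowers its NLL under the LM.
context = "<genre: comedy> <speaker: teenage girl> <register: informal>"
hypothesis = "No way, that's totally ridiculous!"
gain = conditional_nll(hypothesis) - conditional_nll(hypothesis, context)
print(f"context-specificity gain: {gain:.3f} nats/token")
```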