Tobias van der Werff
2022
Automatic Discrimination of Human and Neural Machine Translation: A Study with Multiple Pre-Trained Models and Longer Context
Tobias van der Werff | Rik van Noord | Antonio Toral
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
We address the task of automatically distinguishing between human-translated (HT) and machine-translated (MT) texts. Following recent work, we fine-tune pre-trained language models (LMs) to perform this task. Our work differs in that we use state-of-the-art pre-trained LMs, as well as the test sets of the WMT news shared tasks as training data, to ensure the sentences were not seen during training of the MT system itself. Moreover, we analyse performance for a number of different experimental setups, such as adding translationese data, going beyond the sentence level, and normalizing punctuation. We show that (i) choosing a state-of-the-art LM can make quite a difference: our best baseline system (DeBERTa) outperforms both BERT and RoBERTa by over 3% accuracy, (ii) adding translationese data is only beneficial if there is not much data available, (iii) considerable improvements can be obtained by classifying at the document level, and (iv) normalizing punctuation and thus avoiding (some) shortcuts has no impact on model performance.
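The abstract above describes fine-tuning pre-trained LMs as binary HT-vs-MT classifiers. Below is a minimal, illustrative sketch of that general recipe using Hugging Face Transformers; the checkpoint name, hyperparameters, and toy data are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch: fine-tune a pre-trained LM (here an assumed DeBERTa-v3 checkpoint)
# for binary human-translation vs machine-translation classification.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "microsoft/deberta-v3-base"  # assumed checkpoint, not the paper's exact model

# Hypothetical data: 0 = human translation, 1 = machine translation.
train_data = Dataset.from_dict({
    "text": ["Example human-translated sentence.", "Example machine-translated sentence."],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad sentences to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="ht-vs-mt",
    num_train_epochs=3,            # illustrative hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=train_data).train()
```

The same setup extends to document-level classification by concatenating consecutive sentences into longer inputs, which is the direction the abstract reports as most beneficial.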
MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages
Marta Bañón | Miquel Esplà-Gomis | Mikel L. Forcada | Cristian García-Romero | Taja Kuzman | Nikola Ljubešić | Rik van Noord | Leopoldo Pla Sempere | Gema Ramírez-Sánchez | Peter Rupnik | Vít Suchomel | Antonio Toral | Tobias van der Werff | Jaume Zaragoza
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
We introduce the project “MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages”, funded by the Connecting Europe Facility, which is aimed at building monolingual and parallel corpora for under-resourced European languages. The approach followed consists of crawling large amounts of textual data from carefully selected top-level domains of the Internet, and then applying a curation and enrichment pipeline. In addition to corpora, the project will release successive versions of the free/open-source web crawling and curation software used.
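The abstract outlines a crawl-then-curate approach: fetch pages from selected top-level domains, extract text, and filter it for the target language. The sketch below illustrates that idea only; the libraries (requests, trafilatura, langdetect), seed URLs, and target language are assumptions and do not reflect the project's actual free/open-source tooling.

```python
# Illustrative crawl-and-filter loop for building a small monolingual corpus.
import requests
import trafilatura
from langdetect import detect

SEED_URLS = ["https://example.is/", "https://example.org/"]  # hypothetical seeds
TARGET_LANG = "is"  # e.g. Icelandic, as an example of an under-resourced language

corpus = []
for url in SEED_URLS:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue  # skip unreachable pages
    text = trafilatura.extract(html)  # strip boilerplate, keep main text
    if not text:
        continue
    try:
        if detect(text) == TARGET_LANG:  # simple language-identification filter
            corpus.append({"url": url, "text": text})
    except Exception:
        pass  # language could not be detected; drop the document

print(f"Kept {len(corpus)} documents in '{TARGET_LANG}'")
```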