Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora

Gaurav Kumar, Philipp Koehn, Sanjeev Khudanpur


Abstract
Large web-crawled corpora represent an excellent resource for improving the performance of Neural Machine Translation (NMT) systems across several language pairs. However, since these corpora are typically extremely noisy, their use is fairly limited. Current approaches to deal with this problem mainly focus on filtering using heuristics or single features such as language model scores or bilingual similarity. This work presents an alternative approach which learns weights for multiple sentence-level features. These feature weights, which are optimized directly for the task of improving translation performance, are used to score and filter sentences in the noisy corpora more effectively. We provide results of applying this technique to building NMT systems using the Paracrawl corpus for Estonian-English and show that it beats strong single-feature baselines and hand-designed combinations. Additionally, we analyze the sensitivity of this method to different types of noise and explore whether the learned weights generalize to other language pairs using the Maltese-English Paracrawl corpus.
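As a rough illustration of the filtering step described above, the minimal sketch below scores each sentence pair as a weighted combination of sentence-level features and keeps the highest-scoring fraction of the corpus. It is not the authors' implementation: the feature values, weight vector, and keep_fraction threshold are illustrative assumptions, and the learned weights would come from the reward-modeling procedure described in the paper.

```python
import numpy as np

def score_sentence_pairs(features, weights):
    """Score each sentence pair as a weighted sum of its feature values.

    features: (num_pairs, num_features) array of sentence-level features,
              e.g. language-model scores or bilingual similarity (assumed).
    weights:  (num_features,) array of learned feature weights.
    """
    return features @ weights

def filter_corpus(pairs, features, weights, keep_fraction=0.5):
    """Keep the highest-scoring fraction of the noisy corpus."""
    scores = score_sentence_pairs(features, weights)
    keep = int(len(pairs) * keep_fraction)
    top = np.argsort(-scores)[:keep]
    return [pairs[i] for i in top]

# Toy usage with hypothetical feature columns and weights.
pairs = [("tere", "hello"), ("see on müra", "buy now!!!"), ("aitäh", "thank you")]
features = np.array([
    [0.9, 0.8, 0.7],   # likely clean pair
    [0.1, 0.2, 0.3],   # likely noisy pair
    [0.8, 0.9, 0.6],
])
weights = np.array([0.5, 0.3, 0.2])  # stand-in for learned weights
print(filter_corpus(pairs, features, weights, keep_fraction=0.7))
```

In the paper, the weight vector is optimized directly for downstream translation quality rather than set by hand; the sketch only shows how such weights would be applied to score and filter sentence pairs.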
Anthology ID:
2021.wmt-1.118
Volume:
Proceedings of the Sixth Conference on Machine Translation
Month:
November
Year:
2021
Address:
Online
Editors:
Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, André Martins, Makoto Morishita, Christof Monz
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
1100–1109
URL:
https://aclanthology.org/2021.wmt-1.118
Cite (ACL):
Gaurav Kumar, Philipp Koehn, and Sanjeev Khudanpur. 2021. Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora. In Proceedings of the Sixth Conference on Machine Translation, pages 1100–1109, Online. Association for Computational Linguistics.
Cite (Informal):
Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora (Kumar et al., WMT 2021)
PDF:
https://aclanthology.org/2021.wmt-1.118.pdf
Video:
https://aclanthology.org/2021.wmt-1.118.mp4
Data
ParaCrawl