Secoco: Self-Correcting Encoding for Neural Machine Translation

Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei Li, Hang Li, Deyi Xiong


Abstract
This paper presents Self-correcting Encoding (Secoco), a framework that effectively handles noisy input for robust neural machine translation by introducing self-correcting predictors. Unlike previous approaches to robustness, Secoco enables the NMT model to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco achieves significant improvements over strong baselines on two real-world test sets and a benchmark WMT dataset, with good interpretability. We will make our code and dataset publicly available soon.
Anthology ID:
2021.findings-emnlp.396
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4639–4644
URL:
https://aclanthology.org/2021.findings-emnlp.396
DOI:
10.18653/v1/2021.findings-emnlp.396
Cite (ACL):
Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei Li, Hang Li, and Deyi Xiong. 2021. Secoco: Self-Correcting Encoding for Neural Machine Translation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4639–4644, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Secoco: Self-Correcting Encoding for Neural Machine Translation (Wang et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.396.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.396.mp4