The Curious Case of Hallucinations in Neural Machine Translation

Vikas Raunak, Arul Menezes, Marcin Junczys-Dowmunt


Abstract
In this work, we study hallucinations in Neural Machine Translation (NMT), which lie at an extreme end on the spectrum of NMT pathologies. Firstly, we connect the phenomenon of hallucinations under source perturbation to the Long-Tail theory of Feldman, and present an empirically validated hypothesis that explains hallucinations under source perturbation. Secondly, we consider hallucinations under corpus-level noise (without any source perturbation) and demonstrate that two prominent types of natural hallucinations (detached and oscillatory outputs) could be generated and explained through specific corpus-level noise patterns. Finally, we elucidate the phenomenon of hallucination amplification in popular data-generation processes such as Backtranslation and sequence-level Knowledge Distillation. We have released the datasets and code to replicate our results.
Anthology ID:
2021.naacl-main.92
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1172–1183
URL:
https://aclanthology.org/2021.naacl-main.92
DOI:
10.18653/v1/2021.naacl-main.92
Cite (ACL):
Vikas Raunak, Arul Menezes, and Marcin Junczys-Dowmunt. 2021. The Curious Case of Hallucinations in Neural Machine Translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1172–1183, Online. Association for Computational Linguistics.
Cite (Informal):
The Curious Case of Hallucinations in Neural Machine Translation (Raunak et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.92.pdf
Video:
https://aclanthology.org/2021.naacl-main.92.mp4
Code:
vyraun/hallucinations