Contrastive Decoding Reduces Hallucinations in Large Multilingual Machine Translation Models

Jonas Waldendorf, Barry Haddow, Alexandra Birch


Abstract
In Neural Machine Translation (NMT), models will sometimes generate repetitive or fluent output that is not grounded in the source sentence. This phenomenon is known as hallucination and is a problem even in large-scale multilingual translation models. We propose to use Contrastive Decoding, an algorithm developed to improve generation from unconditional language models, to mitigate hallucinations in NMT. Specifically, we maximise the log-likelihood difference between a model and the same model with reduced contribution from the encoder outputs. Additionally, we propose an alternative implementation of Contrastive Decoding that dynamically weights the difference based on the maximum probability in the output distribution, reducing the effect of CD when the model is confident of its prediction. We evaluate our methods using the Small (418M) and Medium (1.2B) M2M models across 21 low- and medium-resource language pairs. Our results show a 14.6 ± 0.5 and 11.0 ± 0.6 maximal increase in the mean COMET scores for the Small and Medium models, respectively, on those sentences for which the M2M models initially generate a hallucination.
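The scoring scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function names, the `alpha` hyperparameter, and the particular confidence-based weighting (`1 - max probability`) are assumptions made for the sketch; only the general idea of subtracting a weakened model's log-probabilities, with the subtraction damped when the full model is confident, comes from the abstract.

```python
import numpy as np

def contrastive_scores(logp_full, logp_weak, alpha=1.0):
    """Standard Contrastive Decoding score over next-token log-probabilities:
    favour tokens the full model prefers over a weakened model
    (here: the same model with reduced encoder contribution)."""
    return logp_full - alpha * logp_weak

def dynamic_contrastive_scores(logp_full, logp_weak, alpha=1.0):
    """Dynamically weighted variant sketched from the abstract: scale the
    contrastive term by the full model's confidence, so that CD has less
    effect when the maximum probability in the output distribution is high.
    The (1 - max prob) weighting is an illustrative assumption."""
    confidence = np.exp(logp_full).max()   # max probability in the distribution
    weight = alpha * (1.0 - confidence)    # shrinks toward 0 as confidence -> 1
    return logp_full - weight * logp_weak

# Toy example over a 4-token vocabulary: the full model prefers token 0,
# while the encoder-weakened model is uniform (a "hallucinating" prior).
logp_full = np.log(np.array([0.7, 0.1, 0.1, 0.1]))
logp_weak = np.log(np.array([0.25, 0.25, 0.25, 0.25]))
print(np.argmax(contrastive_scores(logp_full, logp_weak)))          # token 0 wins
print(np.argmax(dynamic_contrastive_scores(logp_full, logp_weak)))  # token 0 wins
```

In this toy case both variants select the source-grounded token; the dynamic version simply interpolates back toward the full model's own distribution as its confidence grows.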
Anthology ID:
2024.eacl-long.155
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2526–2539
URL:
https://aclanthology.org/2024.eacl-long.155
Cite (ACL):
Jonas Waldendorf, Barry Haddow, and Alexandra Birch. 2024. Contrastive Decoding Reduces Hallucinations in Large Multilingual Machine Translation Models. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2526–2539, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Contrastive Decoding Reduces Hallucinations in Large Multilingual Machine Translation Models (Waldendorf et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.155.pdf