Measuring and Improving Faithfulness of Attention in Neural Machine Translation

Pooya Moradi, Nishant Kambhatla, Anoop Sarkar


Abstract
While the attention heatmaps produced by neural machine translation (NMT) models seem insightful, there is little evidence that they reflect a model’s true internal reasoning. We propose a measure of faithfulness for NMT based on a variety of stress tests in which attention weights that are crucial for a prediction are perturbed; if the learned weights are a faithful explanation of the predictions, the model should alter its predictions. We show that this faithfulness measure can be improved using a novel differentiable objective that rewards faithful behaviour by the model through probability divergence. Experimental results on multiple language pairs show that our objective function is effective in increasing faithfulness and can lead to a useful analysis of NMT model behaviour and more trustworthy attention heatmaps. The proposed objective improves faithfulness without reducing translation quality, has a useful regularization effect on the NMT model, and can even improve translation quality in some cases.
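The stress test described above can be sketched with a toy attention-based decoder step. Everything here (the model, the dimensions, the function names) is an illustrative assumption, not the paper's actual implementation: the idea is simply to zero out the most crucial attention weight, renormalize, and measure how the output distribution shifts.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(attn, enc_states, W):
    """Toy decoder step (assumed, not the paper's model): the context is
    an attention-weighted sum of encoder states, and the output
    distribution is a softmax over a linear projection of that context."""
    context = attn @ enc_states      # (hidden,)
    return softmax(W @ context)      # (vocab,)

def zero_out_max(attn):
    """Stress test: remove the single most-attended source position
    and renormalize the remaining weights to sum to one."""
    perturbed = attn.copy()
    perturbed[np.argmax(perturbed)] = 0.0
    return perturbed / perturbed.sum()

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 8))   # 5 source positions, hidden dim 8
W = rng.normal(size=(10, 8))           # toy target vocabulary of 10
attn = softmax(rng.normal(size=5))

p = predict(attn, enc_states, W)
q = predict(zero_out_max(attn), enc_states, W)

# Probability divergence of the originally predicted token: if attention
# is faithful, removing the crucial weight should lower this probability.
tok = np.argmax(p)
print(float(p[tok] - q[tok]))
```

A faithful model would show a large drop for tokens whose prediction genuinely depended on the perturbed attention weight; the paper's objective rewards exactly that behaviour during training.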
Anthology ID:
2021.eacl-main.243
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2791–2802
URL:
https://aclanthology.org/2021.eacl-main.243
DOI:
10.18653/v1/2021.eacl-main.243
Cite (ACL):
Pooya Moradi, Nishant Kambhatla, and Anoop Sarkar. 2021. Measuring and Improving Faithfulness of Attention in Neural Machine Translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2791–2802, Online. Association for Computational Linguistics.
Cite (Informal):
Measuring and Improving Faithfulness of Attention in Neural Machine Translation (Moradi et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.243.pdf