Training with Adversaries to Improve Faithfulness of Attention in Neural Machine Translation

Pooya Moradi, Nishant Kambhatla, Anoop Sarkar


Abstract
Can we trust that the attention heatmaps produced by a neural machine translation (NMT) model reflect its true internal reasoning? We isolate and examine in detail the notion of faithfulness in NMT models. We propose a measure of faithfulness for NMT based on a variety of stress tests in which model parameters are perturbed, measuring faithfulness by how often the model output changes. We show that our proposed faithfulness measure for NMT models can be improved using a novel differentiable objective that rewards faithful behaviour by the model through probability divergence. Our experimental results on multiple language pairs show that our objective function is effective in increasing faithfulness and can lead to a useful analysis of NMT model behaviour and more trustworthy attention heatmaps. Our proposed objective improves faithfulness without reducing translation quality; it also appears to have a useful regularization effect on the NMT model and can even improve translation quality in some cases.
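The abstract describes a perturbation-based faithfulness measure (how often the output changes when model parameters are perturbed) and a differentiable objective based on probability divergence. The following is a minimal, hypothetical sketch of these two ideas in NumPy; the function names, the Gaussian perturbation, and the use of KL divergence are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): a probability-divergence term that a differentiable
    faithfulness objective could penalize (illustrative assumption)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def faithfulness_score(logits_fn, params, perturb_scale=0.1,
                       n_trials=200, seed=0):
    """Stress-test style measure: the fraction of random parameter
    perturbations that leave the argmax prediction unchanged."""
    rng = np.random.default_rng(seed)
    base_pred = int(np.argmax(softmax(logits_fn(params))))
    unchanged = 0
    for _ in range(n_trials):
        noise = rng.normal(0.0, perturb_scale, size=params.shape)
        pert_pred = int(np.argmax(softmax(logits_fn(params + noise))))
        unchanged += (pert_pred == base_pred)
    return unchanged / n_trials

# Toy "model": logits are the parameters themselves.
params = np.array([4.0, 0.0, 0.0])
score = faithfulness_score(lambda p: p, params, perturb_scale=0.1)
```

With a large logit margin and small perturbations, the prediction rarely flips, so the score is close to 1; shrinking the margin or increasing `perturb_scale` drives it down, which is the behaviour a stress-test measure should capture.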
Anthology ID:
2020.aacl-srw.14
Volume:
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop
Month:
December
Year:
2020
Address:
Suzhou, China
Editors:
Boaz Shmueli, Yin Jou Huang
Venue:
AACL
Publisher:
Association for Computational Linguistics
Pages:
93–100
URL:
https://aclanthology.org/2020.aacl-srw.14
Cite (ACL):
Pooya Moradi, Nishant Kambhatla, and Anoop Sarkar. 2020. Training with Adversaries to Improve Faithfulness of Attention in Neural Machine Translation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 93–100, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Training with Adversaries to Improve Faithfulness of Attention in Neural Machine Translation (Moradi et al., AACL 2020)
PDF:
https://aclanthology.org/2020.aacl-srw.14.pdf