Explaining Errors in Machine Translation with Absolute Gradient Ensembles

Melda Eksi, Erik Gelbing, Jonathan Stieber, Chi Viet Vu


Abstract
Current research on quality estimation of machine translation focuses on the sentence-level quality of translations. Using explainability methods, these quality estimations can be leveraged for word-level error identification. In this work, we compare different explainability techniques, investigating gradient-based and perturbation-based methods by measuring their performance and the computational effort they require. Throughout our experiments, we observed that using absolute word scores significantly boosts the performance of gradient-based explainers. Further, we combine explainability methods into ensembles to exploit the strengths of individual explainers and obtain better explanations. We propose the use of absolute gradient-based methods: they perform comparably to popular perturbation-based ones while being more time-efficient.
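The core idea of absolute gradient-based word scores can be illustrated with a minimal sketch. The example below assumes a toy linear quality model and gradient-times-input saliency, so the gradient is analytic; the paper's actual translation models and explainers differ, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one token embedding per target word, a linear quality head.
tokens = ["the", "translation", "is", "wrong"]
d = 8
emb = rng.normal(size=(len(tokens), d))  # token embeddings (n, d)
w = rng.normal(size=d)                   # weights of the linear quality head

# Sentence-level quality score: mean-pooled embeddings -> linear head.
score = w @ emb.mean(axis=0)

# For this linear model, d(score)/d(emb_i) = w / n holds analytically,
# so every token shares the same gradient vector.
grad = np.tile(w / len(tokens), (len(tokens), 1))

# Signed gradient-times-input word scores ...
signed = (grad * emb).sum(axis=1)

# ... and the absolute variant, which discards the sign and keeps only
# each word's magnitude of influence on the predicted quality.
absolute = np.abs(signed)

for tok, s in zip(tokens, absolute):
    print(f"{tok:12s} {s:.4f}")
```

Because the model is linear, the signed per-token scores sum exactly to the sentence score, which makes the decomposition easy to sanity-check; taking the absolute value then turns it into a nonnegative word-level importance ranking.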
Anthology ID:
2021.eval4nlp-1.23
Volume:
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Yang Gao, Steffen Eger, Wei Zhao, Piyawat Lertvittayakumjorn, Marina Fomicheva
Venue:
Eval4NLP
Publisher:
Association for Computational Linguistics
Pages:
238–249
URL:
https://aclanthology.org/2021.eval4nlp-1.23
DOI:
10.18653/v1/2021.eval4nlp-1.23
Cite (ACL):
Melda Eksi, Erik Gelbing, Jonathan Stieber, and Chi Viet Vu. 2021. Explaining Errors in Machine Translation with Absolute Gradient Ensembles. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 238–249, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Explaining Errors in Machine Translation with Absolute Gradient Ensembles (Eksi et al., Eval4NLP 2021)
PDF:
https://aclanthology.org/2021.eval4nlp-1.23.pdf
Video:
https://aclanthology.org/2021.eval4nlp-1.23.mp4
Code
sinisterthaumaturge/metascience-explainable-metrics
Data
MLQE-PE