Erik Gelbing


2021

Explaining Errors in Machine Translation with Absolute Gradient Ensembles
Melda Eksi | Erik Gelbing | Jonathan Stieber | Chi Viet Vu
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems

Current research on quality estimation of machine translation focuses on the sentence-level quality of translations. By using explainability methods, we can use these quality estimations for word-level error identification. In this work, we compare different explainability techniques and investigate gradient-based and perturbation-based methods by measuring their performance and the computational effort they require. Throughout our experiments, we observed that using absolute word scores boosts the performance of gradient-based explainers significantly. Further, we combine explainability methods into ensembles to exploit the strengths of individual explainers and obtain better explanations. We propose the usage of absolute gradient-based methods, which perform comparably to popular perturbation-based methods while being more time-efficient.
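
The core idea in the abstract can be illustrated with a short sketch. The PyTorch snippet below shows one plausible way to compute absolute gradient-based word scores for a sentence-level quality estimation model: the `ToyQEModel`, the gradient-times-embedding saliency, and the per-token aggregation are illustrative assumptions standing in for the paper's actual models and scoring, not its exact method.

```python
import torch
import torch.nn as nn

# Toy sentence-level QE model: embeds tokens and regresses one quality score.
# This is a stand-in for whatever QE model is being explained; the real model,
# tokenizer, and score scale are assumptions made for illustration only.
class ToyQEModel(nn.Module):
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, embedded):
        # Expects already-embedded tokens of shape (batch, seq_len, dim).
        return self.head(embedded.mean(dim=1)).squeeze(-1)

def absolute_gradient_scores(model, token_ids):
    """Word-level relevance via |gradient x embedding| w.r.t. the QE score."""
    embedded = model.embed(token_ids)   # (1, seq_len, dim)
    embedded.retain_grad()              # keep gradients on this non-leaf tensor
    score = model(embedded)             # sentence-level quality estimate
    score.backward()                    # gradient of the score w.r.t. embeddings
    # Taking the absolute value before aggregating per token is the
    # "absolute word score" idea referred to in the abstract.
    return (embedded.grad * embedded).abs().sum(dim=-1).squeeze(0)

model = ToyQEModel()
tokens = torch.tensor([[12, 305, 77, 9]])  # hypothetical token ids, one sentence
print(absolute_gradient_scores(model, tokens))  # one relevance score per token
```

Tokens with the largest absolute scores are the candidates for word-level translation errors; a perturbation-based explainer would instead require one forward pass per masked or replaced token, which is where the time-efficiency argument comes from.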