IST-Unbabel 2021 Submission for the Explainable Quality Estimation Shared Task

Marcos Treviso, Nuno M. Guerreiro, Ricardo Rei, André F. T. Martins


Abstract
We present the joint contribution of Instituto Superior Técnico (IST) and Unbabel to the Explainable Quality Estimation (QE) shared task, to which we submitted systems for two tracks: constrained (without word-level supervision) and unconstrained (with word-level supervision). For the constrained track, we experimented with several explainability methods to extract the relevance of input tokens from sentence-level QE models built on top of multilingual pre-trained transformers. Among the tested methods, composing explanations as attention weights scaled by the norm of value vectors yielded the best results. When word-level labels were available during training, our best results were obtained by using the word-level predicted probabilities as explanations. We further improved performance on both tracks by ensembling explanation scores extracted from models trained with different pre-trained transformers, achieving strong results for both in-domain and zero-shot language pairs.
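To make the best-performing constrained-track method concrete, here is a minimal sketch (not the authors' released code; see the repository linked at the bottom of this page) of scaling each attention weight by the L2 norm of the value vector it attends to. All tensor shapes, variable names, and the aggregation over queries are illustrative assumptions.

import torch

torch.manual_seed(0)

n_tokens, d_head = 6, 64           # hypothetical sequence length / head size
Q = torch.randn(n_tokens, d_head)  # query vectors for one attention head
K = torch.randn(n_tokens, d_head)  # key vectors
V = torch.randn(n_tokens, d_head)  # value vectors

# Standard scaled dot-product attention weights: alpha[i, j].
alpha = torch.softmax(Q @ K.T / d_head**0.5, dim=-1)

# Scale each weight by the norm of the value vector it attends to:
# score[i, j] = alpha[i, j] * ||v_j||.
scores = alpha * V.norm(dim=-1)    # broadcasts the norms over rows

# One relevance value per input token j, aggregated over the query
# dimension (summing here; the paper additionally ensembles such
# scores across models built on different pre-trained transformers).
token_relevance = scores.sum(dim=0)
print(token_relevance)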
Anthology ID:
2021.eval4nlp-1.14
Volume:
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Yang Gao, Steffen Eger, Wei Zhao, Piyawat Lertvittayakumjorn, Marina Fomicheva
Venue:
Eval4NLP
Publisher:
Association for Computational Linguistics
Pages:
133–145
URL:
https://aclanthology.org/2021.eval4nlp-1.14
DOI:
10.18653/v1/2021.eval4nlp-1.14
Cite (ACL):
Marcos Treviso, Nuno M. Guerreiro, Ricardo Rei, and André F. T. Martins. 2021. IST-Unbabel 2021 Submission for the Explainable Quality Estimation Shared Task. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 133–145, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
IST-Unbabel 2021 Submission for the Explainable Quality Estimation Shared Task (Treviso et al., Eval4NLP 2021)
PDF:
https://aclanthology.org/2021.eval4nlp-1.14.pdf
Video:
https://aclanthology.org/2021.eval4nlp-1.14.mp4
Code:
deep-spin/explainable-qe-shared-task