How Certain is Your Transformer?

Artem Shelmanov, Evgenii Tsymbalov, Dmitri Puzyrev, Kirill Fedyanin, Alexander Panchenko, Maxim Panov


Abstract
In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). A series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.
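The Monte Carlo dropout recipe the abstract refers to — keeping dropout active at inference time, running several stochastic forward passes, and using the spread of the predictions as an uncertainty signal — can be sketched as follows. This is an illustrative NumPy toy, not the paper's code: the weights, layer sizes, and the `mc_dropout_predict` helper are invented for the example, and a real setup would apply the same idea to a Transformer classifier's dropout layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classifier head standing in for a Transformer's output layers.
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, p_drop=0.1):
    """One stochastic forward pass: dropout stays ON at inference."""
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) >= p_drop
    h = h * mask / (1.0 - p_drop)  # inverted dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, T=50):
    """Average T stochastic passes; predictive entropy of the mean
    distribution serves as the uncertainty estimate."""
    probs = np.stack([forward(x) for _ in range(T)])  # (T, batch, classes)
    mean_probs = probs.mean(axis=0)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs, entropy

x = rng.normal(size=(4, 16))          # a batch of 4 instance encodings
mean_probs, uncertainty = mc_dropout_predict(x)
```

Instances with high predictive entropy can then be flagged as likely error-prone, which is the detection setting evaluated in the paper; the DPP-based variants target the cost of running many such passes.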
Anthology ID:
2021.eacl-main.157
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1833–1840
URL:
https://aclanthology.org/2021.eacl-main.157
DOI:
10.18653/v1/2021.eacl-main.157
Cite (ACL):
Artem Shelmanov, Evgenii Tsymbalov, Dmitri Puzyrev, Kirill Fedyanin, Alexander Panchenko, and Maxim Panov. 2021. How Certain is Your Transformer?. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1833–1840, Online. Association for Computational Linguistics.
Cite (Informal):
How Certain is Your Transformer? (Shelmanov et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.157.pdf
Code
 skoltech-nlp/certain-transformer
Data
CoLA, GLUE, MRPC, SST, SST-2