Accountable Error Characterization

Amita Misra, Zhe Liu, Jalal Mahmud


Abstract
Customers of machine learning systems demand accountability from the companies employing these algorithms for various prediction tasks. Accountability requires understanding of system limitations and the conditions under which erroneous predictions occur: customers are often interested in understanding incorrect predictions, while model developers are focused on finding methods that yield incremental improvements to an existing system. Therefore, we propose an accountable error characterization method, AEC, to understand when and where errors occur within an existing black-box model. AEC, constructed with human-understandable linguistic features, allows model developers to automatically identify the main sources of error for a given classification system. It can also be used to sample the most informative input points for the next round of training. As a case study, we perform error detection for a sentiment analysis task using AEC. Our results on the sample sentiment task show that AEC is able to characterize erroneous predictions into human-understandable categories and also achieves promising results on selecting erroneous samples when compared with uncertainty-based sampling.
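The abstract describes the approach at a high level; the following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation. It assumes a secondary classifier over hand-crafted, human-readable linguistic features that is trained to predict when the black-box model errs: its coefficients point to the main error sources, and its error probabilities can rank unlabeled points for the next training round, as an alternative to uncertainty-based sampling. The feature set, the logistic-regression choice, and all function names are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

NEGATIONS = {"not", "never", "no", "n't"}
FEATURE_NAMES = ["has_negation", "num_tokens", "num_exclam"]

def linguistic_features(text):
    """Human-understandable features (assumed here for illustration)."""
    tokens = text.lower().split()
    return [
        float(any(t in NEGATIONS for t in tokens)),  # negation present
        float(len(tokens)),                          # sentence length
        float(text.count("!")),                      # exclamation marks
    ]

def fit_error_model(texts, preds, gold):
    """Train an error-characterization model: predict whether the
    black-box prediction (preds) disagrees with the gold label."""
    X = np.array([linguistic_features(t) for t in texts])
    y = (np.array(preds) != np.array(gold)).astype(int)  # 1 = error
    clf = LogisticRegression().fit(X, y)
    # Coefficients expose the main sources of error in readable terms.
    for name, w in zip(FEATURE_NAMES, clf.coef_[0]):
        print(f"{name:>12s}: {w:+.2f}")
    return clf

def select_for_labeling(clf, unlabeled_texts, k=2):
    """Rank unlabeled points by predicted error probability
    (a stand-in for the paper's sampling step)."""
    X = np.array([linguistic_features(t) for t in unlabeled_texts])
    p_err = clf.predict_proba(X)[:, 1]
    return [unlabeled_texts[i] for i in np.argsort(-p_err)[:k]]

if __name__ == "__main__":
    # Toy data: the black-box sentiment model errs on negated inputs.
    texts = ["great movie", "not good at all", "loved it", "never again"]
    preds = [1, 1, 1, 1]
    gold  = [1, 0, 1, 0]
    clf = fit_error_model(texts, preds, gold)
    print(select_for_labeling(clf, ["not bad", "so fun", "no thanks"]))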
Anthology ID:
2021.trustnlp-1.4
Volume:
Proceedings of the First Workshop on Trustworthy Natural Language Processing
Month:
June
Year:
2021
Address:
Online
Editors:
Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
28–33
URL:
https://aclanthology.org/2021.trustnlp-1.4
DOI:
10.18653/v1/2021.trustnlp-1.4
Cite (ACL):
Amita Misra, Zhe Liu, and Jalal Mahmud. 2021. Accountable Error Characterization. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 28–33, Online. Association for Computational Linguistics.
Cite (Informal):
Accountable Error Characterization (Misra et al., TrustNLP 2021)
PDF:
https://aclanthology.org/2021.trustnlp-1.4.pdf
Data
SST