Leonardo Chaves Dutra Da Rocha


2024

Explaining the Hardest Errors of Contextual Embedding Based Classifiers
Claudio Moisés Valiense De Andrade | Washington Cunha | Guilherme Fonseca | Ana Clara Souza Pagano | Luana De Castro Santos | Adriana Silvina Pagano | Leonardo Chaves Dutra Da Rocha | Marcos André Gonçalves
Proceedings of the 28th Conference on Computational Natural Language Learning

We seek to explain the causes of misclassification for the most challenging documents, namely those that no classifier using state-of-the-art, highly semantically separable contextual embedding representations managed to predict accurately. To do so, we propose a taxonomy of incorrect predictions, which we use to perform a qualitative human evaluation. We pose two research questions, considering three sentiment datasets in two different domains: movie and product reviews. Evaluators with two different backgrounds assessed documents by comparing the predominant sentiment assigned by the model to the label in the gold dataset in order to decide on a likely misclassification reason. Based on a high inter-evaluator agreement (81.7%), we observed significant differences between the product and movie review domains, such as the prevalence of ambivalence in product reviews and of sarcasm in movie reviews. Our analysis also revealed an unexpectedly high rate of incorrect labeling in the gold dataset (up to 33%) and a significant number of incorrect predictions by the model caused by a series of linguistic phenomena, including amplified words, contrastive markers, comparative sentences, and references to world knowledge. Overall, our taxonomy and methodology allow us to explain between 80% and 85% of the errors with high confidence (agreement), enabling us to point out where future efforts to improve models should be concentrated.
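
As a rough illustration of the kind of agreement figure reported above, the following is a minimal sketch, assuming simple pairwise percent agreement between two evaluators over a shared set of documents; the paper does not specify its exact agreement measure, and the category names and labels below are purely hypothetical.

```python
# Minimal sketch (assumption): pairwise percent agreement between two evaluators,
# one simple way to quantify inter-evaluator agreement such as the 81.7% reported.
# The categories below are illustrative stand-ins, not the paper's actual annotations.

def percent_agreement(labels_a: list[str], labels_b: list[str]) -> float:
    """Fraction of documents on which both evaluators chose the same error category."""
    assert len(labels_a) == len(labels_b), "both evaluators must label the same documents"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical error categories loosely inspired by the taxonomy described in the abstract
# (e.g., gold mislabeling, sarcasm, ambivalence, world knowledge).
eval_1 = ["mislabeling", "sarcasm", "ambivalence", "world_knowledge", "sarcasm"]
eval_2 = ["mislabeling", "sarcasm", "contrastive_marker", "world_knowledge", "sarcasm"]

print(f"Agreement: {percent_agreement(eval_1, eval_2):.1%}")  # -> Agreement: 80.0%
```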