Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks

Edward Collins, Nikolai Rozanov, Bingbing Zhang


Abstract
Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation, but the underlying properties of datasets are discovered only on an ad hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models. In this paper we analyse exactly which characteristics of a dataset best determine how difficult that dataset is for the task of text classification. We then propose an intuitive measure of difficulty for text classification datasets which is simple and fast to calculate. We empirically prove that this measure generalises to unseen data by comparing it to state-of-the-art datasets and results. This measure can be used to analyse the precise source of errors in a dataset and allows fast estimation of how difficult a dataset is to learn. We searched for this measure by training 12 classical and neural network based models on 78 real-world datasets, then used a genetic algorithm to discover the best measure of difficulty. Our difficulty-calculating code and datasets are publicly available.
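The abstract describes searching for a difficulty measure with a genetic algorithm over models trained on many datasets. The following is a minimal, hypothetical sketch of that idea (not the authors' released code; for that, see the linked Wluper/edm repository): a tiny genetic algorithm evolves a subset of candidate dataset statistics whose sum best correlates with observed model error. The statistics and error values here are synthetic placeholders.

```python
import random

random.seed(0)

N_STATS = 6      # candidate difficulty statistics per dataset (placeholder)
N_DATASETS = 40  # toy stand-in for the paper's 78 real-world datasets

# Synthetic "datasets": each is a vector of statistic values in [0, 1].
stats = [[random.random() for _ in range(N_STATS)] for _ in range(N_DATASETS)]
# Pretend the true difficulty depends on statistics 0 and 3, plus noise.
error = [s[0] + s[3] + random.gauss(0, 0.05) for s in stats]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def fitness(mask):
    """How well the sum of the selected statistics predicts model error."""
    if not any(mask):
        return 0.0
    measure = [sum(v for v, m in zip(s, mask) if m) for s in stats]
    return pearson(measure, error)

def evolve(pop_size=20, generations=30):
    """Evolve binary masks over statistics via selection, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(N_STATS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STATS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # occasional bit-flip mutation
                i = random.randrange(N_STATS)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best statistic mask:", best, "correlation:", round(fitness(best), 3))
```

The real search in the paper operates over richer combinations of dataset statistics and the errors of 12 trained models; this sketch only illustrates the evolutionary selection loop at toy scale.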
Anthology ID:
K18-1037
Volume:
Proceedings of the 22nd Conference on Computational Natural Language Learning
Month:
October
Year:
2018
Address:
Brussels, Belgium
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
380–391
URL:
https://aclanthology.org/K18-1037
DOI:
10.18653/v1/K18-1037
PDF:
https://aclanthology.org/K18-1037.pdf
Code:
Wluper/edm
Data:
SST