Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks
Edward Collins (author)
Nikolai Rozanov (author)
Bingbing Zhang (author)
2018-10
Proceedings of the 22nd Conference on Computational Natural Language Learning
Anna Korhonen (editor)
Ivan Titov (editor)
Association for Computational Linguistics
Brussels, Belgium
conference publication
Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation, but the underlying properties of datasets are discovered only on an ad-hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models. In this paper we analyse exactly which characteristics of a dataset best determine how difficult that dataset is for the task of text classification. We then propose an intuitive measure of difficulty for text classification datasets which is simple and fast to calculate. We empirically prove that this measure generalises to unseen data by comparing it to state-of-the-art datasets and results. This measure can be used to analyse the precise source of errors in a dataset and allows fast estimation of how difficult a dataset is to learn. We searched for this measure by training 12 classical and neural-network-based models on 78 real-world datasets, then used a genetic algorithm to discover the best measure of difficulty. Our difficulty-calculating code and datasets are publicly available.
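The abstract describes searching for a difficulty measure built from simple dataset statistics. The paper's actual measure is not reproduced here; the sketch below is a toy illustration of the kind of statistics such a search could combine — class-distribution entropy and class imbalance — with an arbitrary, hypothetical combination rather than the authors' discovered formula.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy of the class distribution, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def class_imbalance(labels):
    """Total-variation distance of the class distribution from uniform, in [0, 1)."""
    counts = Counter(labels)
    total = len(labels)
    uniform = 1 / len(counts)
    return sum(abs(c / total - uniform) for c in counts.values()) / 2

def toy_difficulty(labels):
    # Hypothetical combination with arbitrary unit weights, purely for
    # illustration; the paper's measure is discovered by a genetic search
    # over many such candidate statistics.
    return label_entropy(labels) + class_imbalance(labels)

if __name__ == "__main__":
    balanced = ["pos", "neg"] * 50          # 2 classes, 50/50 split
    skewed = ["pos"] * 90 + ["neg"] * 10    # 2 classes, 90/10 split
    print(toy_difficulty(balanced))          # entropy 1.0 + imbalance 0.0
    print(toy_difficulty(skewed))
```

A genetic search of the kind the abstract mentions would treat each candidate weighting of such statistics as an individual, scoring it by how well the resulting number correlates with model error across the 78 datasets.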
collins-etal-2018-evolutionary
10.18653/v1/K18-1037
https://aclanthology.org/K18-1037
380–391