Quantifying the Evaluation of Heuristic Methods for Textual Data Augmentation

Omid Kashefi, Rebecca Hwa


Abstract
Data augmentation has been shown to be effective in providing additional training data for machine learning, yielding more robust classifiers. However, for some problems there may be multiple augmentation heuristics, and the choice of which one to use can significantly impact the success of training. In this work, we propose a metric for evaluating augmentation heuristics; specifically, we quantify the extent to which an example is “hard to distinguish” by considering the difference between the distributions of the augmented samples of the different classes. Experiments with multiple heuristics on two prediction tasks (positive/negative sentiment and verbosity/conciseness) validate our claim by revealing the connection between the distribution difference across classes and classification accuracy.
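
The abstract's central idea lends itself to a short illustration: score an augmentation heuristic by how different the augmented samples of the two classes remain. The sketch below is a hypothetical reading of that idea, not the paper's implementation; it assumes smoothed unigram distributions as the class representation and Jensen-Shannon divergence as the difference measure, and all function names and toy data are invented for illustration.

```python
# Minimal sketch (not the authors' implementation): score an augmentation
# heuristic by the distribution difference between the augmented samples of
# the two classes. The unigram representation and Jensen-Shannon divergence
# are assumptions made for illustration.
from collections import Counter
import math

def unigram_dist(texts, vocab, alpha=1.0):
    """Smoothed unigram distribution over `vocab` for a list of texts."""
    counts = Counter(tok for t in texts for tok in t.lower().split())
    total = sum(counts[w] for w in vocab) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions on the same support."""
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in a if a[w] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def heuristic_separation(augmented_by_class):
    """Distribution difference between the augmented samples of two classes.

    `augmented_by_class` maps each of two class labels to the augmented
    texts produced by one heuristic. A larger value means the heuristic
    keeps the classes easier to tell apart.
    """
    texts_a, texts_b = augmented_by_class.values()
    vocab = {tok for t in texts_a + texts_b for tok in t.lower().split()}
    return js_divergence(unigram_dist(texts_a, vocab),
                         unigram_dist(texts_b, vocab))

# Toy usage: compare two hypothetical heuristics on sentiment data.
aug_synonym = {"pos": ["great fine movie", "good wonderful film"],
               "neg": ["awful bad movie", "terrible poor film"]}
aug_noise   = {"pos": ["xq movie zz", "qq film xx"],
               "neg": ["xx movie qq", "zz film xq"]}
print(heuristic_separation(aug_synonym))  # higher: classes stay distinct
print(heuristic_separation(aug_noise))    # near zero: classes blur together
```

Under this reading, a heuristic whose augmented samples blur the class distributions (low divergence) should correlate with lower downstream classification accuracy, which is the connection the abstract reports.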
Anthology ID: 2020.wnut-1.26
Volume: Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
Month: November
Year: 2020
Address: Online
Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue: WNUT
Publisher: Association for Computational Linguistics
Pages: 200–208
URL: https://aclanthology.org/2020.wnut-1.26
DOI: 10.18653/v1/2020.wnut-1.26
Cite (ACL): Omid Kashefi and Rebecca Hwa. 2020. Quantifying the Evaluation of Heuristic Methods for Textual Data Augmentation. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 200–208, Online. Association for Computational Linguistics.
Cite (Informal): Quantifying the Evaluation of Heuristic Methods for Textual Data Augmentation (Kashefi & Hwa, WNUT 2020)
PDF: https://aclanthology.org/2020.wnut-1.26.pdf