AutoAugment Is What You Need: Enhancing Rule-based Augmentation Methods in Low-resource Regimes

Juhwan Choi, Kyohoon Jin, Junho Lee, Sangmin Song, YoungBin Kim


Abstract
Text data augmentation is a complex problem due to the discrete nature of sentences. Although rule-based augmentation methods are widely adopted in real-world applications because of their simplicity, they risk damaging the semantics of the original sentence. Previous researchers have suggested easy data augmentation with soft labels (softEDA), employing label smoothing to mitigate this problem. However, finding the best smoothing factor for each model and dataset is challenging; therefore, applying softEDA in real-world settings remains difficult. In this paper, we propose adapting AutoAugment to solve this problem. The experimental results suggest that the proposed method can boost existing augmentation methods and that rule-based methods can enhance cutting-edge pretrained language models. We make our source code publicly available.
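For readers unfamiliar with softEDA, the sketch below illustrates the core idea the abstract refers to: a rule-based EDA operation paired with a label-smoothed target, where the smoothing factor is the hyperparameter that the paper proposes to select automatically with an AutoAugment-style search instead of hand-tuning. This is a minimal illustration, not the paper's implementation; the function names, the two-class setup, and the 0.1 smoothing value are assumptions for demonstration only.

```python
import random

def random_swap(tokens, n_swaps=1):
    """Rule-based EDA operation (illustrative): randomly swap two token positions."""
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def smooth_label(label_index, num_classes, smoothing):
    """Label smoothing: keep 1 - smoothing on the true class, spread the rest uniformly."""
    dist = [smoothing / num_classes] * num_classes
    dist[label_index] += 1.0 - smoothing
    return dist

# softEDA-style pairing: the augmented sentence is trained against a softened label,
# while the original sentence keeps its one-hot label. The smoothing factor (0.1 here,
# a placeholder) is what an AutoAugment-style search would tune per model and dataset.
sentence = "rule based augmentation can distort sentence semantics".split()
augmented = random_swap(sentence, n_swaps=2)
soft_target = smooth_label(label_index=1, num_classes=2, smoothing=0.1)
print(" ".join(augmented), soft_target)
```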
Anthology ID:
2024.eacl-srw.1
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Neele Falk, Sara Papi, Mike Zhang
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1–8
URL:
https://aclanthology.org/2024.eacl-srw.1
Cite (ACL):
Juhwan Choi, Kyohoon Jin, Junho Lee, Sangmin Song, and YoungBin Kim. 2024. AutoAugment Is What You Need: Enhancing Rule-based Augmentation Methods in Low-resource Regimes. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 1–8, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
AutoAugment Is What You Need: Enhancing Rule-based Augmentation Methods in Low-resource Regimes (Choi et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-srw.1.pdf