Mini-DA: Improving Your Model Performance through Minimal Data Augmentation using LLM

Shuangtao Yang, Xiaoyi Liu, Xiaozheng Dong, Bo Fu


Abstract
When performing data augmentation with large language models (LLMs), the common approach is to generate a large number of new samples directly from the original dataset and then train the model on the union of the augmented and original datasets. However, this data generation demands extensive computational resources. In this study, we propose Mini-DA, a minimized data augmentation method that leverages feedback from the target model during training to select only the most challenging samples from the validation set for augmentation. Our experimental results on a text classification task show that, using as little as 13 percent of the original augmentation volume, Mini-DA achieves performance comparable to full data augmentation for intent detection, significantly improving the efficiency of data and computational resource usage.
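The core selection step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the target model exposes per-class probabilities, ranks validation samples by their cross-entropy loss, and returns the hardest ones as augmentation candidates. The function and variable names (`select_hard_samples`, `toy_predict`) are hypothetical.

```python
import math

def select_hard_samples(val_samples, predict_proba, k):
    """Rank validation samples by the target model's cross-entropy loss
    and return the k hardest ones (the augmentation candidates)."""
    scored = []
    for text, label in val_samples:
        # Probability the model assigns to the true label (floored to avoid log(0)).
        p = max(predict_proba(text).get(label, 1e-12), 1e-12)
        scored.append((-math.log(p), text, label))  # higher loss = harder sample
    scored.sort(reverse=True)
    return [(text, label) for _, text, label in scored[:k]]

# Toy stand-in for a trained intent classifier's probability output.
def toy_predict(text):
    return {"greet": 0.9, "bye": 0.1} if "hello" in text else {"greet": 0.2, "bye": 0.8}

val = [("hello there", "greet"), ("see you", "bye"), ("hello again", "bye")]
hard = select_hard_samples(val, toy_predict, k=1)
# 'hard' now holds the misclassified sample ("hello again", "bye"),
# which would be sent to the LLM for targeted augmentation.
```

Only these selected samples are passed to the LLM for generation, which is what keeps the augmentation volume small relative to augmenting the full dataset.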
Anthology ID:
2024.dash-1.4
Volume:
Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Eduard Dragut, Yunyao Li, Lucian Popa, Slobodan Vucetic, Shashank Srivastava
Venues:
DaSH | WS
Publisher:
Association for Computational Linguistics
Pages:
25–30
URL:
https://aclanthology.org/2024.dash-1.4
DOI:
10.18653/v1/2024.dash-1.4
Cite (ACL):
Shuangtao Yang, Xiaoyi Liu, Xiaozheng Dong, and Bo Fu. 2024. Mini-DA: Improving Your Model Performance through Minimal Data Augmentation using LLM. In Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024), pages 25–30, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Mini-DA: Improving Your Model Performance through Minimal Data Augmentation using LLM (Yang et al., DaSH-WS 2024)
PDF:
https://aclanthology.org/2024.dash-1.4.pdf