P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models

Shuo Yang, Chenchen Yuan, Yao Rong, Felix Steinbauer, Gjergji Kasneci


Abstract
Many industries depend on accurate and plausible tabular data augmentation for their business processes. Contemporary approaches to generating tabular data rely on either Generative Adversarial Networks (GANs) or fine-tuned Large Language Models (LLMs). However, GAN-based approaches are documented to produce samples with common-sense errors, attributed to the absence of external knowledge. LLM-based methods, on the other hand, have a limited capacity to capture the disparity between the synthesized and the actual data distributions, because they receive no feedback from a discriminator during training. Furthermore, the decoding step of LLM-based generation introduces gradient breakpoints that block backpropagation of the discriminator's loss, complicating the integration of the two approaches. To address this challenge, we propose applying GAN-style training through proximal policy optimization (PPO), in which a discriminator's feedback guides the LLM to refine the probability distribution of tabular features. This approach enables LLMs to serve as generators within GANs for synthesizing tabular data. Our experiments show that PPO improves the accuracy of models trained on the synthetically generated data by approximately 4% over the state of the art across three real-world datasets.
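To make the core idea of the abstract concrete, the sketch below illustrates, under our own simplifying assumptions rather than the authors' implementation, how a discriminator's score can be fed back to a generator as a PPO reward instead of a differentiable loss. This is what lets a generator with a non-differentiable decoding step (such as an LLM) be trained adversarially. The `RowGenerator`, `Discriminator`, hyperparameters, and toy data are all illustrative stand-ins; a real setup would serialize table rows as text and use an actual LLM policy.

```python
# Conceptual sketch of PPO-driven adversarial training for tabular generation
# (illustrative only, not the paper's code). The discriminator's score is used
# as a scalar reward, so no gradient has to flow through the decoding step.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FEATURES, VOCAB = 4, 8          # toy table: 4 columns, 8 possible values each
CLIP_EPS, LR, STEPS, BATCH = 0.2, 1e-3, 200, 64

class RowGenerator(nn.Module):
    """Stand-in for the LLM: emits one categorical token per column."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(N_FEATURES, VOCAB))

    def sample(self, n):
        dist = torch.distributions.Categorical(logits=self.logits)
        rows = dist.sample((n,))                      # (n, N_FEATURES)
        return rows, dist.log_prob(rows).sum(-1)      # joint log-prob per row

    def log_prob(self, rows):
        dist = torch.distributions.Categorical(logits=self.logits)
        return dist.log_prob(rows).sum(-1)

class Discriminator(nn.Module):
    """Scores how 'real' a one-hot encoded row looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATURES * VOCAB, 32),
                                 nn.ReLU(), nn.Linear(32, 1))

    def forward(self, rows):
        x = F.one_hot(rows, VOCAB).float().flatten(1)
        return self.net(x).squeeze(-1)                # raw logits

gen, disc = RowGenerator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=LR)
d_opt = torch.optim.Adam(disc.parameters(), lr=LR)
real_rows = torch.randint(0, 3, (BATCH, N_FEATURES))  # placeholder "real" data

for step in range(STEPS):
    # --- discriminator update: real vs. generated rows ---
    with torch.no_grad():
        fake_rows, _ = gen.sample(BATCH)
    d_loss = (F.binary_cross_entropy_with_logits(disc(real_rows), torch.ones(BATCH)) +
              F.binary_cross_entropy_with_logits(disc(fake_rows), torch.zeros(BATCH)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- generator update via PPO: discriminator score is the reward ---
    with torch.no_grad():
        rows, old_logp = gen.sample(BATCH)
        reward = torch.sigmoid(disc(rows))            # scalar reward per row
        advantage = reward - reward.mean()            # simple baseline
    for _ in range(4):                                # a few PPO epochs per rollout
        new_logp = gen.log_prob(rows)
        ratio = torch.exp(new_logp - old_logp)
        clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS)
        g_loss = -torch.min(ratio * advantage, clipped * advantage).mean()
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key design choice the sketch highlights is that the discriminator never backpropagates into the generator; its output only shapes the reward in the clipped PPO objective, which is why the approach remains compatible with sampling-based (non-differentiable) decoding.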
Anthology ID:
2024.findings-acl.16
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
248–264
URL:
https://aclanthology.org/2024.findings-acl.16
Cite (ACL):
Shuo Yang, Chenchen Yuan, Yao Rong, Felix Steinbauer, and Gjergji Kasneci. 2024. P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 248–264, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models (Yang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.16.pdf