Evaluating Synthetic Data Generation from User Generated Text

Jenny Chim, Julia Ive, Maria Liakata


Abstract
User-generated content provides a rich resource for studying social and behavioral phenomena. Although its application potential is currently limited by the paucity of expert labels and the privacy risks inherent in personal data, synthetic data can help mitigate this bottleneck. In this work, we introduce an evaluation framework to facilitate research on synthetic language data generation for user-generated text. We define a set of aspects for assessing data quality, namely style preservation, meaning preservation, and divergence, the last serving as a proxy for privacy, and we introduce metrics corresponding to each aspect. Moreover, through a set of generation strategies and representative tasks and baselines across domains, we demonstrate the relationship between the quality aspects of synthetic user-generated content, generation strategies, metrics, and downstream performance. To our knowledge, ours is the first unified evaluation framework for user-generated text covering these aspects and offering both intrinsic and extrinsic evaluation. We envisage it will facilitate developments towards shareable, high-quality synthetic language data.
Anthology ID:
2025.cl-1.6
Volume:
Computational Linguistics, Volume 51, Issue 1 - March 2025
Month:
March
Year:
2025
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
191–233
URL:
https://aclanthology.org/2025.cl-1.6/
DOI:
10.1162/coli_a_00540
Cite (ACL):
Jenny Chim, Julia Ive, and Maria Liakata. 2025. Evaluating Synthetic Data Generation from User Generated Text. Computational Linguistics, 51(1):191–233.
Cite (Informal):
Evaluating Synthetic Data Generation from User Generated Text (Chim et al., CL 2025)
PDF:
https://aclanthology.org/2025.cl-1.6.pdf