Enhancing Dialogue State Tracking Models through LLM-backed User-Agents Simulation

Cheng Niu, Xingguang Wang, Xuxin Cheng, Juntong Song, Tong Zhang


Abstract
Dialogue State Tracking (DST) is designed to monitor the evolving dialogue state throughout a conversation and plays a pivotal role in developing task-oriented dialogue systems. However, obtaining annotated data for the DST task is usually a costly endeavor. In this paper, we focus on employing LLMs to generate dialogue data in order to reduce dialogue collection and annotation costs. Specifically, GPT-4 is used to simulate the user-agent interaction, generating thousands of dialogues annotated with DST labels. Then a two-stage fine-tuning of LLaMA 2 is performed on the generated data and the real data for DST prediction. Experimental results on two public DST benchmarks show that with the generated dialogue data, our model performs better than the baseline trained solely on real data. In addition, our approach is also capable of adapting to the dynamic demands of real-world scenarios, swiftly generating dialogues in new domains. After replacing the dialogue segments of a given domain with the corresponding generated ones, the model achieves performance comparable to that of the model trained on real data. The source code and generated dialogue data are available at https://github.com/ParticleMedia/LUAS.
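
To make the abstract's pipeline concrete, here is a minimal Python sketch of an LLM-backed user/agent simulation loop that emits dialogues with per-turn state annotations. The prompts, slot names, turn budget, and function names below are illustrative assumptions, not the authors' actual configuration; consult the linked repository for the real implementation.

```python
# Hypothetical sketch: GPT-4 plays both the user and the agent, and the agent
# is asked to append a DST label ("STATE: {...}") after each of its replies.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

USER_SYS = (
    "You are a user talking to a booking assistant. Pursue this goal: {goal}. "
    "Reply with one short utterance per turn."
)
AGENT_SYS = (
    "You are a task-oriented dialogue agent. After your reply, output the current "
    "dialogue state as JSON on a new line prefixed with 'STATE:', e.g. "
    'STATE: {"restaurant-food": "italian", "restaurant-area": "centre"}'
)

def chat(system, history):
    """One GPT-4 completion for either the simulated user or the agent."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system}] + history,
        temperature=0.7,
    )
    return resp.choices[0].message.content

def simulate_dialogue(goal, max_turns=6):
    """Alternate user/agent turns and collect DST-labeled dialogue turns."""
    user_hist, agent_hist, turns = [], [], []
    for _ in range(max_turns):
        user_utt = chat(USER_SYS.format(goal=goal), user_hist)
        user_hist.append({"role": "assistant", "content": user_utt})
        agent_hist.append({"role": "user", "content": user_utt})

        agent_out = chat(AGENT_SYS, agent_hist)
        agent_hist.append({"role": "assistant", "content": agent_out})
        # Hide the STATE annotation from the simulated user.
        user_hist.append({"role": "user", "content": agent_out.split("STATE:")[0].strip()})

        turns.append({"user": user_utt, "agent": agent_out})
    return turns

if __name__ == "__main__":
    for turn in simulate_dialogue("book an Italian restaurant in the centre for 4 people"):
        print(turn["user"], "\n", turn["agent"], "\n")
```

In this sketch the generated dialogues (with their STATE annotations) would then serve as the synthetic training data for the first fine-tuning stage, followed by a second stage on real data, mirroring the two-stage recipe described in the abstract.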
Anthology ID:
2024.acl-long.473
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8724–8741
URL:
https://aclanthology.org/2024.acl-long.473
Cite (ACL):
Cheng Niu, Xingguang Wang, Xuxin Cheng, Juntong Song, and Tong Zhang. 2024. Enhancing Dialogue State Tracking Models through LLM-backed User-Agents Simulation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8724–8741, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Enhancing Dialogue State Tracking Models through LLM-backed User-Agents Simulation (Niu et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.473.pdf