PlatoLM: Teaching LLMs in Multi-Round Dialogue via a User Simulator

Chuyi Kong, Yaxin Fan, Xiang Wan, Feng Jiang, Benyou Wang


Abstract
The unparalleled performance of the closed-source ChatGPT has sparked efforts towards its democratization, with notable strides made by leveraging real user–ChatGPT dialogues, as evidenced by Vicuna. However, due to challenges in gathering dialogues involving human participation, current endeavors like Baize and UltraChat rely on ChatGPT conducting roleplay to simulate humans based on instructions, resulting in overdependence on seeds, diminished human-likeness, limited topic diversity, and an absence of genuine multi-round conversational dynamics. To address the above issues, we propose a paradigm to better simulate human behavior and explore the benefits of incorporating more human-like questions in multi-turn conversations. Specifically, we directly target human questions extracted from genuine human–machine conversations as a learning goal and provide a novel user simulator called ‘Socratic’. The experimental results show our response model, ‘PlatoLM’, achieves SoTA performance among LLaMA-based 7B models in MT-Bench. Our findings further demonstrate that our method introduces highly human-like questioning patterns and rich topic structures, which can teach the response model better than previous works in multi-round conversations.
Anthology ID:
2024.acl-long.424
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7841–7863
URL:
https://aclanthology.org/2024.acl-long.424
Cite (ACL):
Chuyi Kong, Yaxin Fan, Xiang Wan, Feng Jiang, and Benyou Wang. 2024. PlatoLM: Teaching LLMs in Multi-Round Dialogue via a User Simulator. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7841–7863, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
PlatoLM: Teaching LLMs in Multi-Round Dialogue via a User Simulator (Kong et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.424.pdf