CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation

Renhao Li, Minghuan Tan, Derek Wong, Min Yang


Abstract
In recent years, instruction fine-tuning (IFT) of large language models (LLMs) has garnered considerable attention as a way to enhance model performance on unseen tasks. Attempts have been made at automatic construction and effective selection of IFT data. However, we posit that previous methods have not fully harnessed the potential of LLMs for enhancing data quality: the responses within IFT data could be further improved by leveraging the capabilities of LLMs themselves. In this paper, we propose CoEvol, an LLM-based multi-agent cooperation framework for improving the responses to instructions. To effectively refine the responses, we develop an iterative framework following a _debate-advise-edit-judge_ paradigm. A two-stage multi-agent debate strategy is further devised to ensure the diversity and reliability of editing suggestions within the framework. Empirically, models equipped with CoEvol outperform competitive baselines as evaluated by MT-Bench and AlpacaEval, demonstrating its effectiveness in enhancing the instruction-following capabilities of LLMs.
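The iterative _debate-advise-edit-judge_ loop described above can be sketched in outline. The following is a minimal illustration only, not the paper's implementation: the agent functions are hypothetical stand-ins (the actual system prompts LLM agents, and the judge compares response quality rather than length), but the control flow mirrors the paradigm named in the abstract.

```python
def debate(instruction, response, rounds=2):
    """Two-stage multi-agent debate: agents argue for and against the
    current response, producing a pool of diverse arguments."""
    arguments = []
    for r in range(rounds):
        arguments.append(f"[round {r}] pro: response addresses '{instruction}'")
        arguments.append(f"[round {r}] con: response could be more detailed")
    return arguments

def advise(arguments):
    """An advisor agent distills the debate into one editing suggestion."""
    return "address the latest points: " + "; ".join(arguments[-2:])

def edit(response, suggestion):
    """An editor agent revises the response according to the suggestion."""
    return f"{response} (revised per: {suggestion})"

def judge(old, new):
    """A judge agent keeps the revision only if it improves on the
    original (a length check stands in for an LLM quality judgment)."""
    return new if len(new) > len(old) else old

def coevol(instruction, response, iterations=2):
    """Iteratively refine a response via debate-advise-edit-judge."""
    for _ in range(iterations):
        arguments = debate(instruction, response)
        suggestion = advise(arguments)
        candidate = edit(response, suggestion)
        response = judge(response, candidate)
    return response
```

In practice each function would wrap a separate LLM agent call; the judge step makes the refinement monotone, so a bad edit never replaces a better earlier response.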
Anthology ID:
2024.emnlp-main.271
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4703–4721
URL:
https://aclanthology.org/2024.emnlp-main.271
Cite (ACL):
Renhao Li, Minghuan Tan, Derek Wong, and Min Yang. 2024. CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4703–4721, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation (Li et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.271.pdf
Software:
 2024.emnlp-main.271.software.zip
Data:
 2024.emnlp-main.271.data.zip