Tuna: Instruction Tuning using Feedback from Large Language Models

Haoran Li, Yiran Liu, Xingxing Zhang, Wei Lu, Furu Wei


Abstract
Instruction tuning of open-source large language models (LLMs) such as LLaMA, using direct outputs from more powerful LLMs like InstructGPT and GPT-4, has proven to be a cost-effective way to align model behaviors with human preferences. However, the instruction-tuned model has seen only one response per instruction and lacks knowledge of potentially better responses. In this paper, we propose finetuning an instruction-tuned LLM with our novel probabilistic ranking and contextual ranking approaches to increase the likelihood of generating better responses. Probabilistic ranking enables the instruction-tuned model to inherit the relative rankings of high-quality and low-quality responses from the teacher LLM. Contextual ranking, in turn, allows the model to refine its own response distribution using the contextual understanding ability of stronger LLMs. Furthermore, we apply probabilistic ranking and contextual ranking sequentially to the instruction-tuned LLM. The resulting model, which we call Tuna, consistently improves performance on Super-NaturalInstructions (119 test tasks), LMentry (25 test tasks), and Vicuna QA, and even obtains better results than several strong reinforcement learning baselines. Our code and data are available at https://github.com/microsoft/LMOps.
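The abstract describes the two ranking objectives only at a high level. As a rough illustration of how a pairwise ranking loss of this kind can be implemented, the PyTorch sketch below scores each candidate response by its length-normalized log-probability under the model and penalizes any pair in which a response ranked lower by the teacher outscores a higher-ranked one. This is a minimal sketch under assumed details (the margin value, the length normalization, and all function names are illustrative), not the paper's exact objective.

import torch
import torch.nn.functional as F

def sequence_score(logits, labels, pad_id=-100):
    # Length-normalized log-probability of each response under the model.
    # logits: (k, seq_len, vocab); labels: (k, seq_len), pad_id marks ignored positions.
    logp = F.log_softmax(logits, dim=-1)
    mask = labels.ne(pad_id)
    token_logp = logp.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (token_logp * mask).sum(-1) / mask.sum(-1).clamp(min=1)

def pairwise_ranking_loss(scores, margin=0.1):
    # scores: (k,) model scores for k >= 2 responses to one instruction,
    # ordered best-to-worst by the teacher's ranking (probabilistic or contextual).
    loss = scores.new_zeros(())
    k = scores.size(0)
    for i in range(k):
        for j in range(i + 1, k):
            # Hinge: a lower-ranked response should not outscore a higher-ranked one.
            loss = loss + F.relu(scores[j] - scores[i] + margin)
    return loss / (k * (k - 1) // 2)

# Toy usage with random tensors standing in for a model's outputs (k = 4 responses).
k, seq_len, vocab = 4, 16, 100
scores = sequence_score(torch.randn(k, seq_len, vocab), torch.randint(0, vocab, (k, seq_len)))
loss = pairwise_ranking_loss(scores)

In practice such a ranking term would typically be combined with the standard instruction-tuning cross-entropy loss; how the two are weighted is a training detail the abstract does not specify.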
Anthology ID: 2023.findings-emnlp.1011
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15146–15163
URL: https://aclanthology.org/2023.findings-emnlp.1011
DOI: 10.18653/v1/2023.findings-emnlp.1011
Cite (ACL):
Haoran Li, Yiran Liu, Xingxing Zhang, Wei Lu, and Furu Wei. 2023. Tuna: Instruction Tuning using Feedback from Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15146–15163, Singapore. Association for Computational Linguistics.
Cite (Informal):
Tuna: Instruction Tuning using Feedback from Large Language Models (Li et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.1011.pdf