Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning

Yuanhao Yue, Chengyu Wang, Jun Huang, Peng Wang


Abstract
Instruction tuning aims to align large language models (LLMs) with open-domain instructions and human-preferred responses. While several studies have explored autonomous approaches to distilling and annotating instructions from powerful proprietary LLMs such as ChatGPT, they often neglect the distributions and characteristics of tasks, as well as the varying difficulty of instructions in the training set. This oversight can lead to imbalanced knowledge and weak generalization in student LLMs. To address these challenges, we introduce Task-Aware Curriculum Planning for Instruction Refinement (TAPIR), a multi-round distillation framework that uses an oracle LLM to select instructions that are difficult for a student LLM to follow. To balance the student's capabilities, task distributions in the training set are adjusted, with responses automatically refined according to their corresponding tasks. In addition, by incorporating curriculum planning, our approach systematically escalates task difficulty, progressively enhancing the student LLM's abilities. We rigorously evaluate TAPIR on several widely recognized benchmarks (e.g., AlpacaEval 2.0 and MT-Bench) with multiple student LLMs. Empirical results demonstrate that student LLMs trained with our method on less training data outperform larger instruction-tuned models and strong distillation baselines.
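
The abstract's core recipe (oracle-scored difficulty filtering, per-task re-balancing, and a difficulty level that rises across distillation rounds) can be pictured with a short sketch. The code below is a hypothetical illustration and not the authors' released implementation: the `Example` fields, the `plan_round` helper, the linear difficulty schedule, and the uniform per-task budget are all assumptions made for clarity.

```python
"""Illustrative sketch of task-aware curriculum planning for distillation.

Hypothetical code: field names, the scoring scale, and the selection
heuristics are assumptions, not the TAPIR implementation.
"""
import random
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Example:
    instruction: str
    teacher_response: str
    task: str            # e.g. "math", "coding", "writing"
    difficulty: float    # oracle-assigned score in [0, 1]; higher = harder for the student


def plan_round(pool, round_idx, num_rounds, per_task_budget, seed=0):
    """Select one round's training subset with a rising difficulty floor."""
    rng = random.Random(seed + round_idx)

    # Curriculum: the minimum accepted difficulty grows linearly per round.
    floor = round_idx / max(num_rounds, 1)
    hard_enough = [ex for ex in pool if ex.difficulty >= floor]

    # Task-aware re-balancing: cap every task at the same budget so that
    # no single task dominates the student's training mix.
    by_task = defaultdict(list)
    for ex in hard_enough:
        by_task[ex.task].append(ex)

    selected = []
    for examples in by_task.values():
        rng.shuffle(examples)
        selected.extend(examples[:per_task_budget])
    rng.shuffle(selected)
    return selected


if __name__ == "__main__":
    tasks = random.choices(["math", "coding", "writing"], k=300)
    pool = [Example(f"instr-{i}", f"resp-{i}", task=t, difficulty=random.random())
            for i, t in enumerate(tasks)]
    for r in range(3):
        subset = plan_round(pool, round_idx=r, num_rounds=3, per_task_budget=30)
        print(f"round {r}: {len(subset)} examples, difficulty floor {r / 3:.2f}")
```

In this sketch the student is retrained on each round's subset, so later rounds see harder instructions while every task keeps a comparable share of the data; the paper additionally refines teacher responses per task, which is omitted here.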
Anthology ID: 2024.findings-emnlp.350
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6030–6054
URL: https://aclanthology.org/2024.findings-emnlp.350
DOI: 10.18653/v1/2024.findings-emnlp.350
Cite (ACL): Yuanhao Yue, Chengyu Wang, Jun Huang, and Peng Wang. 2024. Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6030–6054, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning (Yue et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.350.pdf
Data: 2024.findings-emnlp.350.data.zip