Differentiable Instruction Optimization for Cross-Task Generalization

Masaru Isonuma, Junichiro Mori, Ichiro Sakata


Abstract
Instruction tuning has attracted much attention as a way to achieve generalization across a wide variety of tasks. Although various types of instructions have been manually created for instruction tuning, it is still unclear what kind of instruction is optimal for obtaining cross-task generalization ability. This work presents instruction optimization, which optimizes training instructions with respect to generalization ability. Rather than manually tuning instructions, we introduce learnable instructions and optimize them with gradient descent by leveraging bilevel optimization. Experimental results show that the learned instructions enhance the diversity of the instruction set and improve generalization ability compared to using only manually created instructions.
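The core idea, bilevel optimization, can be illustrated with a toy sketch (hypothetical, not the authors' implementation): an inner step tunes a model parameter on a learnable "instruction" parameter, and an outer step updates the instruction by differentiating the validation loss through that inner step. The quadratic losses, learning rates, and target value below are all illustrative assumptions, with gradients written out by hand.

```python
# Toy bilevel optimization: outer "instruction" parameter lam is optimized
# by differentiating through one inner gradient step on model weight w.
# All quantities here are illustrative; real instruction optimization would
# operate on instruction embeddings and neural-network losses.

ETA_IN, ETA_OUT = 0.3, 0.3   # inner / outer learning rates (assumed)
TARGET = 2.0                 # stands in for held-out (validation) behavior

def inner_step(w, lam):
    """One step of 'instruction tuning': pull w toward the instruction lam."""
    grad_w = 2.0 * (w - lam)           # d/dw of the train loss (w - lam)^2
    return w - ETA_IN * grad_w

def outer_grad(w, lam):
    """Gradient of the validation loss (w' - TARGET)^2 w.r.t. lam,
    back-propagated through the inner update w' = w - ETA_IN * 2(w - lam)."""
    w_new = inner_step(w, lam)
    dwnew_dlam = 2.0 * ETA_IN          # chain rule through the inner step
    return 2.0 * (w_new - TARGET) * dwnew_dlam

w, lam = 0.0, 0.0
for _ in range(200):
    lam -= ETA_OUT * outer_grad(w, lam)   # outer update: optimize instruction
    w = inner_step(w, lam)                # inner update: train on instruction

print(round(w, 3))   # w converges toward TARGET
```

In practice the inner problem is neural-network training and the outer gradient is obtained via automatic differentiation through the inner updates, but the nesting of the two loops is the same.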
Anthology ID:
2023.findings-acl.667
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10502–10517
URL:
https://aclanthology.org/2023.findings-acl.667
DOI:
10.18653/v1/2023.findings-acl.667
Bibkey:
Cite (ACL):
Masaru Isonuma, Junichiro Mori, and Ichiro Sakata. 2023. Differentiable Instruction Optimization for Cross-Task Generalization. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10502–10517, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Differentiable Instruction Optimization for Cross-Task Generalization (Isonuma et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.667.pdf