Specialist or Generalist? Instruction Tuning for Specific NLP Tasks

Chufan Shi, Yixuan Su, Cheng Yang, Yujiu Yang, Deng Cai


Abstract
The potential of large language models (LLMs) to simultaneously perform a wide range of natural language processing (NLP) tasks has been the subject of extensive research. Although instruction tuning has proven to be a data-efficient method for transforming LLMs into such generalist models, their performance still lags behind that of specialist models trained exclusively for specific tasks. In this paper, we investigate whether incorporating broad-coverage generalist instruction tuning can contribute to building a specialist model. We hypothesize that its efficacy depends on task specificity and skill requirements. Our experiments assess four target tasks with distinct coverage levels, revealing that integrating generalist instruction tuning consistently enhances model performance when the task coverage is broad. The effect is particularly pronounced when the amount of task-specific training data is limited. Further investigation into three target tasks focusing on different capabilities demonstrates that generalist instruction tuning improves understanding and reasoning abilities. However, for tasks requiring factual knowledge, generalist data containing hallucinatory information may negatively affect the model’s performance. Overall, our work provides a systematic guide for developing specialist models with general instruction tuning.
Anthology ID:
2023.emnlp-main.947
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15336–15348
URL:
https://aclanthology.org/2023.emnlp-main.947
DOI:
10.18653/v1/2023.emnlp-main.947
Cite (ACL):
Chufan Shi, Yixuan Su, Cheng Yang, Yujiu Yang, and Deng Cai. 2023. Specialist or Generalist? Instruction Tuning for Specific NLP Tasks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15336–15348, Singapore. Association for Computational Linguistics.
Cite (Informal):
Specialist or Generalist? Instruction Tuning for Specific NLP Tasks (Shi et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.947.pdf
Video:
https://aclanthology.org/2023.emnlp-main.947.mp4