Small Models are Valuable Plug-ins for Large Language Models

Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, Julian McAuley


Abstract
Large language models (LLMs) such as GPT-3 and GPT-4 are powerful, but their weights are often publicly unavailable and their immense sizes make the models difficult to tune on common hardware. As a result, effectively tuning these models with large-scale supervised data can be challenging. The main alternative, In-Context Learning (ICL), can use only a small number of supervised examples due to context-length limits. In this paper, we propose Super In-Context Learning (SuperICL), which allows black-box LLMs to work with locally fine-tuned smaller models, resulting in superior performance on supervised tasks. Our experiments demonstrate that SuperICL can improve performance beyond state-of-the-art fine-tuned models while addressing the instability problem of in-context learning.
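The abstract describes SuperICL only at a high level, so the sketch below illustrates one plausible way a locally fine-tuned plug-in classifier's predictions and confidences could be injected into an LLM prompt for a final decision. The model name, prompt format, and the `call_llm` placeholder are illustrative assumptions, not the paper's exact implementation (see the PDF for details).

```python
# Illustrative SuperICL-style prompting sketch (assumptions noted above).
from transformers import pipeline

# Small plug-in model: a locally fine-tuned sentiment classifier.
plugin = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def call_llm(prompt: str) -> str:
    """Placeholder for a black-box LLM API (e.g., GPT-3/GPT-4); plug in your provider."""
    raise NotImplementedError

def super_icl_prompt(demos, test_input):
    """Pair each example with the plug-in model's prediction and confidence,
    then ask the LLM for the final label of the test input."""
    lines = []
    for text, gold in demos:
        pred = plugin(text)[0]
        lines.append(
            f"Input: {text}\n"
            f"Plug-in model prediction: {pred['label']} (confidence {pred['score']:.2f})\n"
            f"Label: {gold}\n"
        )
    pred = plugin(test_input)[0]
    lines.append(
        f"Input: {test_input}\n"
        f"Plug-in model prediction: {pred['label']} (confidence {pred['score']:.2f})\n"
        f"Label:"
    )
    return "\n".join(lines)

demos = [
    ("A thoroughly enjoyable film.", "positive"),
    ("The plot never comes together.", "negative"),
]
prompt = super_icl_prompt(demos, "An uneven but ultimately rewarding movie.")
# final_label = call_llm(prompt)  # the black-box LLM produces the final prediction
```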
Anthology ID: 2024.findings-acl.18
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 283–294
URL: https://aclanthology.org/2024.findings-acl.18
DOI: 10.18653/v1/2024.findings-acl.18
Cite (ACL): Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, and Julian McAuley. 2024. Small Models are Valuable Plug-ins for Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 283–294, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Small Models are Valuable Plug-ins for Large Language Models (Xu et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.18.pdf