Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models

Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, Greg Ver Steeg


Abstract
Parameter-efficient tuning (PET) methods fit pre-trained language models (PLMs) to downstream tasks by either computing a small compressed update for a subset of model parameters, or appending and fine-tuning a small number of new model parameters to the pre-trained network. Hand-designed PET architectures from the literature perform well in practice, but have the potential to be improved via automated neural architecture search (NAS). We propose an efficient NAS method for learning PET architectures via structured and unstructured pruning. We present experiments on GLUE demonstrating the effectiveness of our algorithm and discuss how PET architectural design choices affect performance in practice.
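The abstract contrasts two PET mechanisms (compressed updates to existing parameters vs. small appended modules) and proposes searching over PET architectures via structured and unstructured pruning. As a rough, hypothetical illustration (not the paper's actual algorithm), the sketch below applies both pruning styles to a LoRA-style low-rank update `delta_W = A @ B`; all names and shapes here are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LoRA-style PET update for one pretrained weight matrix:
# a dense d x d update factored through rank r.
d, r = 8, 2
A = rng.normal(size=(d, r))
B = rng.normal(size=(r, d))
delta_W = A @ B  # dense parameter-efficient update

def unstructured_prune(update, keep_frac):
    """Zero out all but the largest-magnitude entries (unstructured pruning)."""
    k = int(update.size * keep_frac)
    thresh = np.sort(np.abs(update), axis=None)[-k]
    return update * (np.abs(update) >= thresh)

def structured_prune(update, keep_rows):
    """Zero out whole rows, keeping those with largest L2 norm (structured pruning)."""
    norms = np.linalg.norm(update, axis=1)
    mask = np.zeros(update.shape[0], dtype=bool)
    mask[np.argsort(norms)[-keep_rows:]] = True
    return update * mask[:, None]

sparse_u = unstructured_prune(delta_W, keep_frac=0.25)  # keeps 25% of entries
sparse_s = structured_prune(delta_W, keep_rows=2)       # keeps 2 of 8 rows
```

In a NAS setting, choices like `keep_frac` and which rows or modules survive pruning define the searched PET architecture; the paper's method learns such choices automatically rather than fixing them by hand as above.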
Anthology ID:
2023.findings-acl.539
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8506–8515
URL:
https://aclanthology.org/2023.findings-acl.539
DOI:
10.18653/v1/2023.findings-acl.539
Bibkey:
Cite (ACL):
Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, and Greg Ver Steeg. 2023. Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8506–8515, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models (Lawton et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.539.pdf
Video:
https://aclanthology.org/2023.findings-acl.539.mp4