Memory-Efficient Fine-Tuning of Transformers via Token Selection

Antoine Simoulin, Namyong Park, Xiaoyi Liu, Grey Yang


Abstract
Fine-tuning provides an effective means to specialize pre-trained models for various downstream tasks. However, fine-tuning often incurs high memory overhead, especially for large transformer-based models such as LLMs. While existing methods may reduce certain parts of the memory required for fine-tuning, they still require caching all intermediate activations computed in the forward pass to update weights during the backward pass. In this work, we develop TokenTune, a method to reduce memory usage, specifically the memory needed to store intermediate activations, in the fine-tuning of transformer-based models. During the backward pass, TokenTune approximates the gradient computation by backpropagating through just a subset of input tokens. Thus, with TokenTune, only a subset of the intermediate activations is cached during the forward pass. TokenTune can also be easily combined with existing methods like LoRA, further reducing the memory cost. We evaluate our approach on pre-trained transformer models with up to billions of parameters, considering performance on multiple downstream tasks such as text classification and question answering in a few-shot learning setup. Overall, TokenTune achieves performance on par with full fine-tuning or representative memory-efficient fine-tuning methods, while greatly reducing the memory footprint, especially when combined with other methods that have complementary memory-reduction mechanisms. We hope that our approach will facilitate the fine-tuning of large transformers, whether to specialize them for specific domains or to co-train them with other neural components of a larger system. Our code is available at https://github.com/facebookresearch/tokentune.
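
To illustrate the idea sketched in the abstract, the following is a minimal, hypothetical PyTorch snippet showing how gradients can be restricted to a subset of token positions by detaching the rest. The function name `select_tokens` and all details are assumptions for illustration only, not the authors' implementation (see the repository linked above); a real implementation also needs to cache activations for just the selected positions in the forward pass, which is where the memory savings come from.

```python
import torch
import torch.nn as nn

def select_tokens(hidden, k):
    # hidden: (batch, seq_len, dim) activations from a transformer block.
    # Keep the autograd graph for k randomly chosen positions per example
    # and treat the remaining positions as constants (detached).
    batch, seq_len, _ = hidden.shape
    mask = torch.zeros(batch, seq_len, dtype=torch.bool, device=hidden.device)
    for b in range(batch):
        keep = torch.randperm(seq_len, device=hidden.device)[:k]
        mask[b, keep] = True
    # Gradients flow only through the selected positions; the detached
    # branch contributes nothing to the backward pass.
    return torch.where(mask.unsqueeze(-1), hidden, hidden.detach())

# Toy usage: wrap a block's output so the backward pass only touches k tokens.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
x = torch.randn(2, 16, 64)
out = select_tokens(layer(x), k=4)
out.sum().backward()  # parameter gradients computed from 4 tokens per example
```

Note that this sketch only captures the gradient approximation; as the abstract states, TokenTune realizes the memory reduction by caching just the selected tokens' intermediate activations during the forward pass.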
Anthology ID:
2024.emnlp-main.1202
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21565–21580
URL:
https://aclanthology.org/2024.emnlp-main.1202
Cite (ACL):
Antoine Simoulin, Namyong Park, Xiaoyi Liu, and Grey Yang. 2024. Memory-Efficient Fine-Tuning of Transformers via Token Selection. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21565–21580, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Memory-Efficient Fine-Tuning of Transformers via Token Selection (Simoulin et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1202.pdf