Quadapter: Adapter for GPT-2 Quantization

Minseop Park, Jaeseong You, Markus Nagel, Simyung Chang


Abstract
Transformer language models such as GPT-2 are difficult to quantize because outliers in their activations lead to a large quantization error. To adapt to the error, one must use quantization-aware training, which entails fine-tuning with a dataset and a training pipeline identical to those of the original model. Pretrained language models, however, often do not grant access to their datasets and training pipelines, forcing us to rely on arbitrary ones for fine-tuning. In that case, quantization-aware training is observed to overfit the model to the fine-tuning data. For quantization without such overfitting, we introduce a quantization adapter (Quadapter), a small set of parameters that are learned to make activations quantization-friendly by scaling them channel-wise, while the model parameters remain unchanged. By applying our method to the challenging task of quantizing GPT-2, we demonstrate that it effectively prevents the overfitting and improves the quantization performance.
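
As a rough illustration of the mechanism the abstract describes, below is a minimal PyTorch sketch. The min-max fake quantizer, the straight-through estimator, and all names in it are illustrative assumptions for this page, not the paper's exact formulation.

import torch
import torch.nn as nn

def fake_quantize(x, num_bits=8):
    # Simulated uniform (min-max, per-tensor) quantization; a stand-in
    # for whatever quantizer the deployment target actually uses.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(qmin - x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    dq = (q - zero_point) * scale
    # Straight-through estimator so gradients can reach the learned scales.
    return x + (dq - x).detach()

class Quadapter(nn.Module):
    # Hypothetical re-implementation of the idea in the abstract: a learned
    # per-channel scale is applied before activation quantization and its
    # inverse afterwards, so the module is an identity at full precision.
    # Only the scales are trained; the model weights stay frozen.
    def __init__(self, num_channels):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_channels))

    def forward(self, x):
        # x: (..., num_channels). Shrinking outlier channels before
        # quantization reduces the per-tensor quantization error.
        return fake_quantize(x * self.scale) / self.scale

For example, with activations whose channels differ widely in magnitude, only the adapter's scales receive gradients during calibration:

x = torch.randn(4, 768) * torch.linspace(0.1, 10.0, 768)  # outlier channels
adapter = Quadapter(768)
loss = (adapter(x) - x).pow(2).mean()
loss.backward()  # gradients flow to adapter.scale only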
Anthology ID:
2022.findings-emnlp.185
Original:
2022.findings-emnlp.185v1
Version 2:
2022.findings-emnlp.185v2
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2510–2517
URL:
https://aclanthology.org/2022.findings-emnlp.185
DOI:
10.18653/v1/2022.findings-emnlp.185
Cite (ACL):
Minseop Park, Jaeseong You, Markus Nagel, and Simyung Chang. 2022. Quadapter: Adapter for GPT-2 Quantization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2510–2517, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Quadapter: Adapter for GPT-2 Quantization (Park et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.185.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.185.mp4