PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models

Zhuocheng Gong, Jiahao Liu, Qifan Wang, Yang Yang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Rui Yan


Abstract
While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use. Therefore, effectively compressing large-scale PLMs becomes an increasingly important problem. Quantization, which represents high-precision tensors with a low-bit fixed-point format, is a viable solution. However, most existing quantization methods are task-specific, requiring customized training and quantization with a large number of trainable parameters on each individual task. Motivated by the observation that the over-parameterized nature of PLMs makes it possible to freeze most of the parameters during the fine-tuning stage, in this work, we propose a novel “quantize before fine-tuning” framework, PreQuant, that differs from both quantization-aware training and post-training quantization. PreQuant is compatible with various quantization strategies, with outlier-aware parameter-efficient fine-tuning incorporated to correct the induced quantization error. We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5. We also provide an empirical investigation into the workflow of PreQuant, which sheds light on its efficacy.
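For readers unfamiliar with the basic operation the abstract refers to, the sketch below illustrates what "representing high-precision tensors with a low-bit fixed-point format" means in general: per-tensor symmetric uniform quantization of a weight matrix. This is a minimal, generic illustration only; it is not the PreQuant algorithm, which additionally handles outlier parameters and pairs quantization with parameter-efficient fine-tuning. Function names and the 4-bit setting are chosen for illustration.

import torch

def quantize_per_tensor(w: torch.Tensor, num_bits: int = 4):
    """Symmetric uniform quantization of a weight tensor (illustrative only)."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 7 for 4-bit signed integers
    scale = w.abs().max() / qmax                # map the largest magnitude to qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale              # low-bit integers + one fp scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximation of the original high-precision weights."""
    return q.float() * scale

# Example: quantize a random "pre-trained" weight matrix before any fine-tuning.
w = torch.randn(768, 768)
q, scale = quantize_per_tensor(w, num_bits=4)
w_hat = dequantize(q, scale)
print((w - w_hat).abs().mean())                 # average quantization error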
Anthology ID:
2023.findings-acl.511
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8065–8079
URL:
https://aclanthology.org/2023.findings-acl.511
DOI:
10.18653/v1/2023.findings-acl.511
Cite (ACL):
Zhuocheng Gong, Jiahao Liu, Qifan Wang, Yang Yang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, and Rui Yan. 2023. PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8065–8079, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models (Gong et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.511.pdf