Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, and Dongsoo Lee. 2022. AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 3288–3305, Abu Dhabi, United Arab Emirates, December 2022. Edited by Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang. Association for Computational Linguistics. Anthology ID: kwon-etal-2022-alphatuning. DOI: 10.18653/v1/2022.findings-emnlp.240. URL: https://aclanthology.org/2022.findings-emnlp.240/