@inproceedings{tao-etal-2022-compression,
    title = "Compression of Generative Pre-trained Language Models via Quantization",
    author = "Tao, Chaofan and
      Hou, Lu and
      Zhang, Wei and
      Shang, Lifeng and
      Jiang, Xin and
      Liu, Qun and
      Luo, Ping and
      Wong, Ngai",
    editor = "Muresan, Smaranda and
      Nakov, Preslav and
      Villavicencio, Aline",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.331/",
    doi = "10.18653/v1/2022.acl-long.331",
    pages = "4821--4836"
}