@inproceedings{na-etal-2022-train,
    title = "Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models",
    author = "Na, Clara and Mehta, Sanket Vaibhav and Strubell, Emma",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.361/",
    doi = "10.18653/v1/2022.findings-emnlp.361",
    pages = "4909--4936",
}