Surgical Feature-Space Decomposition of LLMs: Why, When and How?

Arnav Chavan, Nahush Lele, Deepak Gupta


Abstract
Low-rank approximations of the weight and feature space can enhance the performance of deep learning models, whether in terms of improving generalization or reducing the latency of inference. However, there is no clear consensus yet on how, when and why these approximations are helpful for large language models (LLMs). In this work, we empirically study the efficacy of weight and feature space decomposition in transformer-based LLMs. We demonstrate that surgical decomposition not only provides critical insights into the trade-off between compression and language modelling performance, but also sometimes enhances commonsense reasoning performance of LLMs. Our empirical analysis identifies specific network segments that intrinsically exhibit a low-rank structure. Furthermore, we extend our investigation to the implications of low-rank approximations on model bias. Overall, our findings offer a novel perspective on optimizing LLMs, presenting the low-rank approximation not only as a tool for performance enhancements, but also as a means to potentially rectify biases within these models.
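
For readers unfamiliar with the underlying operation, the sketch below illustrates a generic weight-space low-rank approximation of a single linear layer via truncated SVD. It is not the authors' implementation: the rank r, the helper name low_rank_factorize, and the use of PyTorch are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): rank-r approximation of one nn.Linear
# layer, W ≈ U_r S_r V_r^T, realized as two thinner Linear layers in sequence.
import torch
import torch.nn as nn


def low_rank_factorize(layer: nn.Linear, r: int) -> nn.Sequential:
    """Replace `layer` with two Linear layers whose product is the
    rank-r truncated-SVD approximation of the original weight matrix."""
    W = layer.weight.data                      # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :r] * S[:r]                     # (out_features, r), singular values folded in
    Vh_r = Vh[:r, :]                           # (r, in_features)

    first = nn.Linear(layer.in_features, r, bias=False)
    second = nn.Linear(r, layer.out_features, bias=layer.bias is not None)
    first.weight.data = Vh_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)


# Usage: factorize a toy layer and check the reconstruction error.
layer = nn.Linear(1024, 1024)
approx = low_rank_factorize(layer, r=128)
x = torch.randn(4, 1024)
print((layer(x) - approx(x)).abs().max())
```

Feature-space decomposition follows the same replace-one-layer pattern but decomposes layer activations rather than weights; the "surgical" element the paper studies is the choice of which network segments to decompose and at what rank.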
Anthology ID: 2024.acl-long.130
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 2389–2400
URL: https://aclanthology.org/2024.acl-long.130
Cite (ACL): Arnav Chavan, Nahush Lele, and Deepak Gupta. 2024. Surgical Feature-Space Decomposition of LLMs: Why, When and How?. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2389–2400, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Surgical Feature-Space Decomposition of LLMs: Why, When and How? (Chavan et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.130.pdf