A Critical Study of What Code-LLMs (Do Not) Learn

Abhinav Anand, Shweta Verma, Krishna Narasimhan, Mira Mezini


Abstract
Large language models trained on code corpora (code-LLMs) have demonstrated impressive performance on various coding-assistance tasks. However, despite increases in model size and training data, code-LLMs still have limitations such as suggesting code with syntactic errors or misused variables. Some studies argue that code-LLMs perform well on coding tasks because they use self-attention and hidden representations to encode relations among input tokens. However, previous work has not studied which code properties are not encoded by code-LLMs. In this paper, we conduct a fine-grained analysis of the attention maps and hidden representations of code-LLMs. Our study indicates that code-LLMs encode relations only among specific subsets of input tokens. Specifically, by categorizing input tokens into syntactic tokens and identifiers, we find that models encode relations among syntactic tokens and among identifiers, but fail to encode relations between syntactic tokens and identifiers. We also find that fine-tuned models encode these relations more poorly than their pre-trained counterparts. Additionally, larger models with billions of parameters encode significantly less information about code than models with only a few hundred million parameters.
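The abstract's analysis rests on two steps: categorizing input tokens into syntactic tokens and identifiers, and then comparing attention within and across those categories. The sketch below illustrates the idea with Python's standard `tokenize` module and a mock attention matrix; the paper's actual token categorization and attention-map extraction from a real code-LLM may differ, so treat the category rules and the `mean_attention` helper as illustrative assumptions only.

```python
import io
import keyword
import tokenize

def categorize(code):
    """Coarsely label each Python token as an 'identifier' or a
    'syntactic' token (keyword, operator, punctuation), echoing the
    abstract's two categories. Illustrative only; the paper's exact
    scheme may differ. Literals and whitespace tokens are skipped."""
    cats = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type == tokenize.NAME:
            cats.append("syntactic" if keyword.iskeyword(tok.string)
                        else "identifier")
        elif tok.type == tokenize.OP:
            cats.append("syntactic")
    return cats

def mean_attention(att, cats, src, dst):
    """Mean attention weight from tokens of category `src` to tokens of
    category `dst`, given an n x n attention matrix `att` aligned with
    `cats`. A real analysis would take `att` from a model's attention
    heads; here it is mocked."""
    vals = [att[i][j]
            for i, ci in enumerate(cats) if ci == src
            for j, cj in enumerate(cats) if cj == dst]
    return sum(vals) / len(vals)

cats = categorize("x = x + 1\n")
# -> ['identifier', 'syntactic', 'identifier', 'syntactic']

# Mock attention matrix where identifiers attend mostly to identifiers,
# mimicking the within-category pattern the paper reports.
att = [[0.4, 0.1, 0.4, 0.1],
       [0.1, 0.4, 0.1, 0.4],
       [0.4, 0.1, 0.4, 0.1],
       [0.1, 0.4, 0.1, 0.4]]
print(mean_attention(att, cats, "identifier", "identifier"))  # 0.4
print(mean_attention(att, cats, "identifier", "syntactic"))   # 0.1
```

With a real model, `att` would be one head's attention matrix over the code's tokens, and a gap like the 0.4-vs-0.1 contrast above, but between within-category and cross-category averages, would correspond to the paper's finding that relations between syntactic tokens and identifiers are weakly encoded.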
Anthology ID:
2024.findings-acl.939
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15869–15889
URL:
https://aclanthology.org/2024.findings-acl.939
DOI:
10.18653/v1/2024.findings-acl.939
Cite (ACL):
Abhinav Anand, Shweta Verma, Krishna Narasimhan, and Mira Mezini. 2024. A Critical Study of What Code-LLMs (Do Not) Learn. In Findings of the Association for Computational Linguistics: ACL 2024, pages 15869–15889, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
A Critical Study of What Code-LLMs (Do Not) Learn (Anand et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.939.pdf