A Simple, Yet Effective Approach to Finding Biases in Code Generation

Spyridon Mouselinos, Mateusz Malinowski, Henryk Michalewski


Abstract
Recently, high-performing code generation systems based on large language models have emerged. They are trained on massive corpora containing much more natural text than actual executable computer code. This work shows that current code generation systems exhibit undesired biases inherited from their large language model backbones, which can reduce the quality of the generated code under specific circumstances. To investigate this effect, we propose the “block of influence” concept, which enables a modular decomposition and analysis of coding challenges. We introduce an automated intervention mechanism reminiscent of adversarial testing that exposes undesired biases through the failure modes of the models under test. Finally, we demonstrate how our framework can be used as a data transformation technique during fine-tuning, acting as a mitigation strategy for these biases.
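
To illustrate the general idea, the sketch below shows how a HumanEval-style prompt might be split into coarse blocks (signature, docstring, worked examples) and perturbed with a misleading function name before comparing the model's completions on the original and perturbed prompts. This is a hypothetical illustration, not the authors' actual pipeline; the helper names, the example prompt, and the specific perturbation are assumptions made for exposition.

import re

def split_into_blocks(prompt: str) -> dict:
    # Decompose a HumanEval-style prompt into coarse "blocks of influence":
    # the function signature, the docstring body, and its doctest-style examples.
    signature = prompt.splitlines()[0]
    doc_match = re.search(r'"""(.*?)"""', prompt, re.DOTALL)
    docstring = doc_match.group(1) if doc_match else ""
    examples = [line.strip() for line in docstring.splitlines() if ">>>" in line]
    return {"signature": signature, "docstring": docstring, "examples": examples}

def rename_function(prompt: str, new_name: str) -> str:
    # Intervention on the name block: rename the function consistently
    # throughout the prompt while leaving its specification unchanged.
    match = re.search(r"def\s+(\w+)\s*\(", prompt)
    return prompt if match is None else prompt.replace(match.group(1), new_name)

# Illustrative prompt (assumed, not taken from the paper's benchmark).
prompt = '''def is_prime(n):
    """Return True if n is a prime number.
    >>> is_prime(7)
    True
    >>> is_prime(8)
    False
    """
'''

blocks = split_into_blocks(prompt)
perturbed = rename_function(prompt, "is_even")  # misleading name, same specification

# A bias probe would query the code model on both `prompt` and `perturbed`
# and compare the functional correctness (e.g., pass@1) of the completions.
print(blocks["signature"])
print(perturbed)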
Anthology ID: 2023.findings-acl.718
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 11299–11329
URL: https://aclanthology.org/2023.findings-acl.718
DOI: 10.18653/v1/2023.findings-acl.718
Cite (ACL): Spyridon Mouselinos, Mateusz Malinowski, and Henryk Michalewski. 2023. A Simple, Yet Effective Approach to Finding Biases in Code Generation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11299–11329, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): A Simple, Yet Effective Approach to Finding Biases in Code Generation (Mouselinos et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.718.pdf