Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation

Gabriel Orlanski, Alex Gittens


Abstract
Answering a programming question with only its title is difficult, as salient contextual information is left out. To address this, we present a corpus of over 40,000 StackOverflow question texts to be used in conjunction with the corresponding intents from the CoNaLa dataset (Yin et al., 2018). Using both the intent and the question body, we use BART to establish a baseline BLEU score of 34.35 for this new task. We find a further improvement of 2.8% by combining the mined CoNaLa data with the labeled data, reaching a BLEU score of 35.32. We then evaluate the prior state-of-the-art CoNaLa models with this additional data, and find that our proposed method of using the body and mined data outperforms the previous state-of-the-art BLEU score by 71.96% (relative). Finally, we perform ablations showing that BART is an unsupervised multimodal learner and examine its extractive behavior.
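The abstract's core setup is pairing each CoNaLa intent (the question title) with the full StackOverflow question body before feeding the result to BART as a sequence-to-sequence input. A minimal sketch of that pairing step is below; the field names follow the public CoNaLa JSON format (`question_id`, `intent`, `snippet`), but the body corpus, the separator token, and the helper are illustrative assumptions, not the paper's exact implementation:

```python
# Sketch: joining CoNaLa intents with their StackOverflow question bodies
# to build (source, target) pairs for a seq2seq model such as BART.
# The separator token "<body>" is an assumption for illustration.

conala = [
    {
        "question_id": 13905936,
        "intent": "converting a list of integers into a single integer",
        "snippet": "r = int(''.join(map(str, x)))",
    },
]

# Hypothetical body corpus keyed by StackOverflow question id.
bodies = {
    13905936: "I have a list of integers and I want to combine them "
              "into one integer. How can I do this in Python?",
}

def build_example(record, bodies, sep=" <body> "):
    """Build one training pair: the encoder input concatenates the
    intent with the question body; the target is the code snippet."""
    body = bodies.get(record["question_id"], "")
    source = record["intent"] + sep + body
    return source, record["snippet"]

pairs = [build_example(r, bodies) for r in conala]
```

Questions without a recovered body simply fall back to the intent alone, so the same pipeline covers both the annotated and the mined portions of the data.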
Anthology ID:
2021.nlp4prog-1.8
Volume:
Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)
Month:
August
Year:
2021
Address:
Online
Venue:
NLP4Prog
Publisher:
Association for Computational Linguistics
Pages:
65–76
URL:
https://aclanthology.org/2021.nlp4prog-1.8
DOI:
10.18653/v1/2021.nlp4prog-1.8
Cite (ACL):
Gabriel Orlanski and Alex Gittens. 2021. Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021), pages 65–76, Online. Association for Computational Linguistics.
Cite (Informal):
Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation (Orlanski & Gittens, NLP4Prog 2021)
PDF:
https://aclanthology.org/2021.nlp4prog-1.8.pdf
Code
 gabeorlanski/stackoverflow-encourages-cheating
Data
 CoNaLa-Ext
 CoNaLa
 CodeSearchNet
 JuICe