ConTest: A Unit Test Completion Benchmark featuring Context

Johannes Villmow, Jonas Depoix, Adrian Ulges


Abstract
We introduce ConTest, a benchmark for NLP-based unit test completion: the task of predicting a test's assert statements given its setup and focal method, i.e., the method to be tested. ConTest is large-scale (365k datapoints). Besides the test code and the tested code, it also features context code called by either. We found this context to be crucial for accurately predicting assertions. We also introduce baselines based on transformer encoder-decoders and study the effects of including syntactic information and context. Overall, our models achieve a BLEU score of 38.2 while generating unparsable code in only 1.92% of cases.
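To make the task concrete, below is a minimal, hypothetical Java/JUnit sketch of what a unit test completion datapoint looks like; all class, method, and test names are illustrative assumptions, not drawn from the benchmark itself. The model receives the focal method and the test's setup code as input and must generate the final assert statement.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.EmptyStackException;
import java.util.List;

import org.junit.Test;

// Focal method: the method under test, here a simple pop().
class SimpleStack<T> {
    private final List<T> items = new ArrayList<>();

    void push(T item) {
        items.add(item);
    }

    T pop() {
        if (items.isEmpty()) {
            throw new EmptyStackException();
        }
        return items.remove(items.size() - 1);
    }
}

public class SimpleStackTest {
    @Test
    public void testPopReturnsLastPushedItem() {
        // Test setup: provided to the model as input.
        SimpleStack<String> stack = new SimpleStack<>();
        stack.push("a");
        stack.push("b");
        String result = stack.pop();
        // Assert statement: the output the model must predict.
        assertEquals("b", result);
    }
}
```

In the benchmark's setting, context code called by the test or the focal method (here, for example, the push implementation) would also be part of the input, which the abstract reports to be crucial for predicting the assertion.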
Anthology ID:
2021.nlp4prog-1.2
Volume:
Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Royi Lachmy, Ziyu Yao, Greg Durrett, Milos Gligoric, Junyi Jessy Li, Ray Mooney, Graham Neubig, Yu Su, Huan Sun, Reut Tsarfaty
Venue:
NLP4Prog
Publisher:
Association for Computational Linguistics
Pages:
17–25
URL:
https://aclanthology.org/2021.nlp4prog-1.2
DOI:
10.18653/v1/2021.nlp4prog-1.2
Cite (ACL):
Johannes Villmow, Jonas Depoix, and Adrian Ulges. 2021. ConTest: A Unit Test Completion Benchmark featuring Context. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021), pages 17–25, Online. Association for Computational Linguistics.
Cite (Informal):
ConTest: A Unit Test Completion Benchmark featuring Context (Villmow et al., NLP4Prog 2021)
PDF:
https://aclanthology.org/2021.nlp4prog-1.2.pdf