@inproceedings{zhang-yang-2025-extracting,
    title = "Extracting the Essence and Discarding the Dross: Enhancing Code Generation with Contrastive Execution Feedback",
    author = "Zhang, Xuanyu and
      Yang, Qing",
    editor = "Rambow, Owen and
      Wanner, Leo and
      Apidianaki, Marianna and
      Al-Khalifa, Hend and
      Di Eugenio, Barbara and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.704/",
    pages = "10569--10575",
    abstract = "Recent advancements have integrated the execution process and feedback into the training of large language models for code generation, demonstrating enhanced model performance. However, current methods amalgamate erroneous code with feedback and the final correct code as target sentences, inadvertently increasing the probability of generating both correct and incorrect code during inference. While multiple iterations of feedback can eventually yield the correct answer, this iterative process is cumbersome and time-consuming for users who prefer immediate accurate results. To address this challenge, we propose ConCoder, a contrastive learning-based code generation model with execution feedback. This approach enables the model to efficiently produce accurate code from the outset while rectifying and optimizing the incorrect code. Furthermore, our training emphasizes learning from the causes of errors, allowing the model to understand and avoid mistakes. Through extensive experiments, ConCoder demonstrates significant improvements in generating accurate code and understanding error correction, paving the way for more reliable code generation models."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
  <mods ID="zhang-yang-2025-extracting">
    <titleInfo>
      <title>Extracting the Essence and Discarding the Dross: Enhancing Code Generation with Contrastive Execution Feedback</title>
    </titleInfo>
    <name type="personal">
      <namePart type="given">Xuanyu</namePart>
      <namePart type="family">Zhang</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">author</roleTerm>
      </role>
    </name>
    <name type="personal">
      <namePart type="given">Qing</namePart>
      <namePart type="family">Yang</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">author</roleTerm>
      </role>
    </name>
    <originInfo>
      <dateIssued>2025-01</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
      <titleInfo>
        <title>Proceedings of the 31st International Conference on Computational Linguistics</title>
      </titleInfo>
      <name type="personal">
        <namePart type="given">Owen</namePart>
        <namePart type="family">Rambow</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Leo</namePart>
        <namePart type="family">Wanner</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Marianna</namePart>
        <namePart type="family">Apidianaki</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Hend</namePart>
        <namePart type="family">Al-Khalifa</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Barbara</namePart>
        <namePart type="family">Di Eugenio</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Steven</namePart>
        <namePart type="family">Schockaert</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <originInfo>
        <publisher>Association for Computational Linguistics</publisher>
        <place>
          <placeTerm type="text">Abu Dhabi, UAE</placeTerm>
        </place>
      </originInfo>
      <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>Recent advancements have integrated the execution process and feedback into the training of large language models for code generation, demonstrating enhanced model performance. However, current methods amalgamate erroneous code with feedback and the final correct code as target sentences, inadvertently increasing the probability of generating both correct and incorrect code during inference. While multiple iterations of feedback can eventually yield the correct answer, this iterative process is cumbersome and time-consuming for users who prefer immediate accurate results. To address this challenge, we propose ConCoder, a contrastive learning-based code generation model with execution feedback. This approach enables the model to efficiently produce accurate code from the outset while rectifying and optimizing the incorrect code. Furthermore, our training emphasizes learning from the causes of errors, allowing the model to understand and avoid mistakes. Through extensive experiments, ConCoder demonstrates significant improvements in generating accurate code and understanding error correction, paving the way for more reliable code generation models.</abstract>
    <identifier type="citekey">zhang-yang-2025-extracting</identifier>
    <location>
      <url>https://aclanthology.org/2025.coling-main.704/</url>
    </location>
    <part>
      <date>2025-01</date>
      <extent unit="page">
        <start>10569</start>
        <end>10575</end>
      </extent>
    </part>
  </mods>
</modsCollection>
%0 Conference Proceedings
%T Extracting the Essence and Discarding the Dross: Enhancing Code Generation with Contrastive Execution Feedback
%A Zhang, Xuanyu
%A Yang, Qing
%Y Rambow, Owen
%Y Wanner, Leo
%Y Apidianaki, Marianna
%Y Al-Khalifa, Hend
%Y Di Eugenio, Barbara
%Y Schockaert, Steven
%S Proceedings of the 31st International Conference on Computational Linguistics
%D 2025
%8 January
%I Association for Computational Linguistics
%C Abu Dhabi, UAE
%F zhang-yang-2025-extracting
%X Recent advancements have integrated the execution process and feedback into the training of large language models for code generation, demonstrating enhanced model performance. However, current methods amalgamate erroneous code with feedback and the final correct code as target sentences, inadvertently increasing the probability of generating both correct and incorrect code during inference. While multiple iterations of feedback can eventually yield the correct answer, this iterative process is cumbersome and time-consuming for users who prefer immediate accurate results. To address this challenge, we propose ConCoder, a contrastive learning-based code generation model with execution feedback. This approach enables the model to efficiently produce accurate code from the outset while rectifying and optimizing the incorrect code. Furthermore, our training emphasizes learning from the causes of errors, allowing the model to understand and avoid mistakes. Through extensive experiments, ConCoder demonstrates significant improvements in generating accurate code and understanding error correction, paving the way for more reliable code generation models.
%U https://aclanthology.org/2025.coling-main.704/
%P 10569-10575
Markdown (Informal)
[Extracting the Essence and Discarding the Dross: Enhancing Code Generation with Contrastive Execution Feedback](https://aclanthology.org/2025.coling-main.704/) (Zhang & Yang, COLING 2025)
ACL
Xuanyu Zhang and Qing Yang. 2025. Extracting the Essence and Discarding the Dross: Enhancing Code Generation with Contrastive Execution Feedback. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10569–10575, Abu Dhabi, UAE. Association for Computational Linguistics.
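
The abstract's central idea is a contrastive objective that teaches a code language model to prefer execution-verified correct code over erroneous code for the same prompt. Below is a minimal, hypothetical PyTorch sketch of one such objective, a margin-based ranking loss over sequence log-probabilities. It illustrates the general technique only and is not the authors' ConCoder implementation; the function names, the margin value, and the use of a margin loss (rather than, say, InfoNCE) are all assumptions.

import torch
import torch.nn.functional as F

def sequence_log_prob(logits: torch.Tensor, target_ids: torch.Tensor,
                      pad_id: int = 0) -> torch.Tensor:
    """Mean per-token log-probability of target_ids under the model's logits.

    logits: (batch, seq_len, vocab); target_ids: (batch, seq_len).
    Padding positions are masked out of the average.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_lp = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    mask = (target_ids != pad_id).float()
    return (token_lp * mask).sum(-1) / mask.sum(-1).clamp(min=1.0)

def contrastive_code_loss(pos_logits, pos_ids, neg_logits, neg_ids,
                          margin: float = 1.0) -> torch.Tensor:
    """Rank code that passed execution above code that failed it.

    The loss is zero once the correct sample's log-probability exceeds the
    erroneous sample's by at least `margin` (a hypothetical hyperparameter).
    """
    lp_pos = sequence_log_prob(pos_logits, pos_ids)  # execution-verified code
    lp_neg = sequence_log_prob(neg_logits, neg_ids)  # code that failed tests
    return F.relu(margin - (lp_pos - lp_neg)).mean()

# Toy usage with random tensors standing in for a code LM's outputs.
batch, seq_len, vocab = 2, 8, 100
pos_logits = torch.randn(batch, seq_len, vocab)
neg_logits = torch.randn(batch, seq_len, vocab)
pos_ids = torch.randint(1, vocab, (batch, seq_len))
neg_ids = torch.randint(1, vocab, (batch, seq_len))
print(float(contrastive_code_loss(pos_logits, pos_ids, neg_logits, neg_ids)))

In practice such a ranking term would be combined with the standard likelihood loss on the correct code, so the model learns to generate accurate code directly rather than merely to discriminate; how ConCoder weights or structures these terms is specified in the paper itself, not here.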