Yuhao Zhang
2024
CodeFort: Robust Training for Code Generation Models
Yuhao Zhang | Shiqi Wang | Haifeng Qian | Zijian Wang | Mingyue Shang | Linbo Liu | Sanjay Krishna Gouda | Baishakhi Ray | Murali Krishna Ramanathan | Xiaofei Ma | Anoop Deoras
Findings of the Association for Computational Linguistics: EMNLP 2024
Code generation models are not robust to small perturbations, which often lead to incorrect generations and significantly degrade the performance of these models. Although improving the robustness of code generation models is crucial to enhancing user experience in real-world applications, existing research efforts do not address this issue. To fill this gap, we propose CodeFort, a framework that improves the robustness of code generation models by generalizing a large variety of code perturbations to enrich the training data and by enabling various robust training strategies that mix data augmentation, batch augmentation, adversarial logits pairing, and contrastive learning, all carefully designed to support high-throughput training. Extensive evaluations show that we increase the average robust pass rates of baseline CodeGen models from 14.79 to 21.74. We notably decrease the robustness drop rate from 95.02% to 54.95% against code-syntax perturbations.
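To make the idea of perturbation-based data enrichment concrete, here is a minimal sketch of one common code perturbation, identifier renaming, applied to a Python snippet. This is an illustrative toy, not the CodeFort implementation; the class and function names are my own, and a robust-training pipeline would apply many such transformations at scale.

```python
import ast


class RenameVars(ast.NodeTransformer):
    """Toy code perturbation: rename identifiers according to a mapping.

    Identifier renaming preserves program semantics, so the perturbed
    code is a valid augmentation target for robust training.
    """

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Rename variable references in the function body.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

    def visit_arg(self, node):
        # Rename function parameters to match.
        if node.arg in self.mapping:
            node.arg = self.mapping[node.arg]
        return node


def perturb(source: str, mapping: dict) -> str:
    """Parse, rename, and unparse a code snippet (Python 3.9+)."""
    tree = ast.parse(source)
    tree = RenameVars(mapping).visit(tree)
    return ast.unparse(tree)


original = "def add(a, b):\n    return a + b"
perturbed = perturb(original, {"a": "x", "b": "y"})
```

A model that generates correct code for `original` but fails on `perturbed` exhibits exactly the fragility the paper targets; augmenting training data with such semantics-preserving variants is the simplest of the strategies the abstract lists.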
2021
Certified Robustness to Programmable Transformations in LSTMs
Yuhao Zhang | Aws Albarghouthi | Loris D’Antoni
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Deep neural networks for natural language processing are fragile in the face of adversarial examples—small input perturbations, like synonym substitution or word duplication, which cause a neural network to change its prediction. We present an approach to certifying the robustness of LSTMs (and extensions of LSTMs) and training models that can be efficiently certified. Our approach can certify robustness to intractably large perturbation spaces defined programmatically in a language of string transformations. Our evaluation shows that (1) our approach can train models that are more robust to combinations of string transformations than those produced using existing techniques, and (2) our approach achieves high certification accuracy on the resulting models.
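To illustrate why programmatically defined perturbation spaces become intractably large, here is a toy enumeration of the space induced by two of the transformations the abstract mentions, synonym substitution and word duplication. The synonym table and function names are illustrative assumptions; the paper's certification procedure reasons about such spaces symbolically rather than by enumeration.

```python
from itertools import product

# Toy synonym table -- an assumption for illustration only.
SYNONYMS = {"good": ["great"], "movie": ["film"]}


def perturbations(words):
    """Enumerate the perturbation space where each word may be kept,
    duplicated, or replaced by a known synonym.

    The space grows multiplicatively with sentence length, which is
    why enumeration is infeasible for real inputs and certification
    must bound the network's behavior over the space instead.
    """
    options = []
    for w in words:
        opts = [[w], [w, w]]                      # identity, duplication
        opts += [[s] for s in SYNONYMS.get(w, [])]  # synonym substitution
        options.append(opts)
    for combo in product(*options):
        yield " ".join(tok for group in combo for tok in group)


space = list(perturbations("a good movie".split()))
```

Even this three-word sentence with a two-entry synonym table yields 18 variants (2 x 3 x 3); with realistic vocabularies and longer inputs the space explodes combinatorially, motivating certified rather than enumerative robustness checks.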