Deokyeong Kang
2024
Revisiting the Impact of Pursuing Modularity for Code Generation
Deokyeong Kang | KiJung Seo | Taeuk Kim
Findings of the Association for Computational Linguistics: EMNLP 2024
Modular programming, which aims to construct the final program by integrating smaller, independent building blocks, has long been regarded as a desirable practice in software development. However, with the rise of recent code generation agents built upon large language models (LLMs), a question emerges: is this traditional practice equally effective for these new tools? In this work, we assess the impact of modularity in code generation by introducing a novel metric for its quantitative measurement. Surprisingly, and contrary to conventional wisdom on the topic, we find that modularity is not a core factor in improving the performance of code generation models. We also explore potential explanations for why LLMs do not exhibit a preference for modular code over non-modular code.
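The abstract does not spell out the metric itself; as a purely hypothetical illustration of how modularity in generated code could be quantified, the sketch below scores a Python program by the fraction of its statements that live inside function definitions. The name `modularity_proxy` and the scoring rule are assumptions made here for illustration, not the metric proposed in the paper.

```python
# Hypothetical sketch only -- NOT the metric introduced in the paper.
# It treats the fraction of statements placed inside function bodies as a
# crude proxy for how modular a Python program is.
import ast


def modularity_proxy(source: str) -> float:
    """Return the fraction of statements that sit inside function definitions."""
    tree = ast.parse(source)
    all_stmts = [node for node in ast.walk(tree) if isinstance(node, ast.stmt)]
    inside = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for child in ast.walk(node):
                if isinstance(child, ast.stmt) and child is not node:
                    inside.add(id(child))
    return len(inside) / len(all_stmts) if all_stmts else 0.0


if __name__ == "__main__":
    monolithic = "x = [i * i for i in range(10)]\nprint(sum(x))"
    modular = (
        "def squares(n):\n"
        "    return [i * i for i in range(n)]\n"
        "print(sum(squares(10)))"
    )
    print(modularity_proxy(monolithic))  # 0.0 -- no statements inside functions
    print(modularity_proxy(modular))     # ~0.33 -- one of three statements
```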
2023
X-SNS: Cross-Lingual Transfer Prediction through Sub-Network Similarity
Taejun Yun | Jinhyeon Kim | Deokyeong Kang | Seonghoon Lim | Jihoon Kim | Taeuk Kim
Findings of the Association for Computational Linguistics: EMNLP 2023
Cross-lingual transfer (XLT) is an emergent ability of multilingual language models that preserves their performance on a task to a significant extent when evaluated in languages that were not included in the fine-tuning process. While English, due to its widespread usage, is typically regarded as the primary language for model adaptation in various tasks, recent studies have revealed that the efficacy of XLT can be amplified by selecting the most appropriate source languages based on specific conditions. In this work, we propose the utilization of sub-network similarity between two languages as a proxy for predicting the compatibility of the languages in the context of XLT. Our approach is model-oriented, better reflecting the inner workings of foundation models. In addition, it requires only a moderate amount of raw text from candidate languages, distinguishing it from the majority of previous methods that rely on external resources. In experiments, we demonstrate that our method is more effective than baselines across diverse tasks. Specifically, it shows proficiency in ranking candidates for zero-shot XLT, achieving an improvement of 4.6% on average in terms of NDCG@3. We also provide extensive analyses that confirm the utility of sub-networks for XLT prediction.
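As a rough illustration of the ranking setting described above (not the paper's actual sub-network extraction or similarity function), the sketch below assumes each language has already been reduced to a binary mask over model parameters, compares masks with a Jaccard-style overlap, and evaluates a predicted source-language ranking with NDCG@3, the metric reported in the abstract. `mask_similarity`, the example masks, and the relevance scores are hypothetical names and values introduced here.

```python
# Illustrative sketch only -- assumes each language's "sub-network" is a
# binary mask over model parameters, obtained by some unspecified procedure.
import math

import numpy as np


def mask_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard-style overlap between two binary parameter masks
    (an assumed similarity function, used here only for illustration)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0


def ndcg_at_k(ranking, relevance, k=3):
    """NDCG@k for a predicted ranking of candidate source languages, where
    `relevance` maps each language to its observed transfer performance."""
    dcg = sum(relevance[lang] / math.log2(i + 2) for i, lang in enumerate(ranking[:k]))
    ideal = sorted(relevance.values(), reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg else 0.0


if __name__ == "__main__":
    # Made-up masks and transfer scores, purely for demonstration.
    masks = {
        "de": np.array([1, 1, 0, 1, 0]),
        "fr": np.array([1, 0, 0, 1, 1]),
        "hi": np.array([0, 1, 1, 0, 1]),
    }
    target = np.array([1, 1, 0, 1, 1])
    ranking = sorted(masks, key=lambda lang: mask_similarity(masks[lang], target), reverse=True)
    relevance = {"de": 0.72, "fr": 0.68, "hi": 0.55}  # hypothetical XLT accuracies
    print(ranking, ndcg_at_k(ranking, relevance, k=3))
```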
Co-authors
- Taeuk Kim 2
- Taejun Yun 1
- Jinhyeon Kim 1
- Seonghoon Lim 1
- Jihoon Kim 1