Kyunghoon Bae
2024
Deep Exploration of Cross-Lingual Zero-Shot Generalization in Instruction Tuning
Janghoon Han | Changho Lee | Joongbo Shin | Stanley Jungkyu Choi | Honglak Lee | Kyunghoon Bae
Findings of the Association for Computational Linguistics: ACL 2024
Instruction tuning has emerged as a powerful technique, significantly boosting zero-shot performance on unseen tasks. While recent work has explored cross-lingual generalization by applying instruction tuning to multilingual models, previous studies have primarily focused on English, with limited exploration of non-English tasks. For an in-depth exploration of cross-lingual generalization in instruction tuning, we perform instruction tuning individually on two distinct language meta-datasets. Subsequently, we assess the performance on unseen tasks in a language different from the one used for training. To facilitate this investigation, we introduce a novel non-English meta-dataset named “KORANI” (Korean Natural Instruction), comprising 51 Korean benchmarks. Moreover, we design cross-lingual templates to mitigate discrepancies in the language and instruction format of the template between training and inference within the cross-lingual setting. Our experiments reveal consistent improvements through cross-lingual generalization in both English and Korean, outperforming the baseline by average scores of 20.7% and 13.6%, respectively. Remarkably, these enhancements are comparable to those achieved by monolingual instruction tuning and even surpass them in some tasks. The results underscore the significance of acquiring relevant data across languages over linguistic congruence with unseen tasks during instruction tuning.
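As an illustration of what such a cross-lingual template might look like, the minimal sketch below wraps a Korean task instance in an English instruction-format scaffold; the template wording, function name, and example instance are assumptions for illustration, not material from the paper or the KORANI release.

```python
# Minimal illustrative sketch of a cross-lingual template: the instruction
# scaffold stays in the training language (English) while the task instance
# remains in the evaluation language (Korean). The template text and the
# example below are hypothetical, not taken from the paper or KORANI.

CROSS_LINGUAL_TEMPLATE = (
    "Definition: {definition}\n"
    "Input: {task_input}\n"
    "Output:"
)


def make_cross_lingual_prompt(definition: str, task_input: str) -> str:
    """Wrap a (possibly non-English) task instance in an English-format scaffold."""
    return CROSS_LINGUAL_TEMPLATE.format(definition=definition, task_input=task_input)


if __name__ == "__main__":
    prompt = make_cross_lingual_prompt(
        definition="Classify the sentiment of the given Korean review as positive or negative.",
        task_input="배송이 빨라서 좋았어요.",  # "The delivery was fast, which was nice."
    )
    print(prompt)
```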
2023
On Sample-Efficient Code Generation
Hojae Han | Yu Jin Kim | Byoungjip Kim | Youngwon Lee | Kyungjae Lee | Kyungmin Lee | Moontae Lee | Kyunghoon Bae | Seung-won Hwang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large language models often struggle to predict runtime behavior in code generation tasks, leading to a reliance on rejection sampling (best-of-n), which generates multiple code snippets and then selects the best one. Our distinction lies in reducing sampling costs without compromising generation quality. We introduce EFFICODE, a novel framework that prioritizes sampling on test problems that models can solve. We show how EFFICODE estimates solvability to optimize computational costs during multiple sampling. Based on empirical evidence, EFFICODE consistently reduces sampling budgets while maintaining comparable code generation performance, especially when problems are challenging. In addition, using EFFICODE to rank sampled code snippets also proves effective for answer code selection, reducing temporal costs because it requires no execution or test case generation.
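The hedged sketch below shows one way a fixed best-of-n sampling budget could be allocated in proportion to estimated solvability, in the spirit of the abstract; the helper names (estimate_solvability, generate, passes_tests) are hypothetical stand-ins, not the paper's actual interfaces.

```python
# Hedged sketch of solvability-prioritized best-of-n sampling. The callables
# passed in (estimate_solvability, generate, passes_tests) are assumptions
# introduced for illustration, not EFFICODE's real API.
from typing import Callable, Dict, List, Optional


def allocate_budget(problems: List[str],
                    estimate_solvability: Callable[[str], float],
                    total_budget: int) -> Dict[str, int]:
    """Split a fixed sampling budget across problems in proportion to their
    estimated solvability, so likely-solvable problems receive more samples."""
    scores = {p: max(estimate_solvability(p), 1e-6) for p in problems}
    norm = sum(scores.values())
    return {p: max(1, round(total_budget * s / norm)) for p, s in scores.items()}


def best_of_n(problem: str,
              n: int,
              generate: Callable[[str], str],
              passes_tests: Callable[[str, str], bool]) -> Optional[str]:
    """Plain rejection sampling: draw up to n candidates and return the first
    one that passes the problem's tests, or None if the budget is exhausted."""
    for _ in range(n):
        candidate = generate(problem)
        if passes_tests(problem, candidate):
            return candidate
    return None
```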
2022
ANNA: Enhanced Language Representation for Question Answering
Changwook Jun | Hansol Jang | Myoseop Sim | Hyun Kim | Jooyoung Choi | Kyungkoo Min | Kyunghoon Bae
Proceedings of the 7th Workshop on Representation Learning for NLP
Pre-trained language models have brought significant improvements in performance on a variety of natural language processing tasks. Most existing models that achieve state-of-the-art results present their approaches from the separate perspectives of data processing, pre-training tasks, neural network modeling, or fine-tuning. In this paper, we demonstrate how these approaches affect performance individually, and show that a language model achieves the best results on a specific question answering task when the approaches are jointly considered in pre-training. In particular, we propose an extended pre-training task and a new neighbor-aware mechanism that attends more to neighboring tokens to capture the richness of context for pre-training language modeling. Our best model achieves new state-of-the-art results of 95.7% F1 and 90.6% EM on SQuAD 1.1 and also outperforms existing pre-trained language models such as RoBERTa, ALBERT, ELECTRA, and XLNet on the SQuAD 2.0 benchmark.
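The sketch below shows one plausible way an attention layer could be biased toward neighboring tokens, as a way to make the "neighbor-aware" idea concrete; it uses an additive window bonus and is an assumption-level illustration, not ANNA's actual mechanism.

```python
# Assumption-level sketch of a neighbor-aware attention bias: positions within
# a small window of the query receive an additive bonus before the softmax.
# This only illustrates attending more to neighboring tokens; it is not the
# ANNA implementation.
import torch
import torch.nn.functional as F


def neighbor_aware_attention(q, k, v, window: int = 2, bonus: float = 1.0):
    """Scaled dot-product attention over (batch, seq_len, dim) tensors with an
    additive bias that boosts keys within `window` positions of the query."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5        # (batch, seq, seq)
    pos = torch.arange(q.size(-2), device=q.device)
    distance = (pos[None, :] - pos[:, None]).abs()      # |i - j| for each pair
    scores = scores + (distance <= window).to(q.dtype) * bonus
    weights = F.softmax(scores, dim=-1)
    return weights @ v
```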