2025
sDPO: Don’t Use Your Data All at Once
Dahyun Kim | Yungi Kim | Wonho Song | Hyeonwoo Kim | Yunsu Kim | Sanghoon Kim | Chanjun Park
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
As large language models (LLMs) continue to advance, aligning them with human preferences has become a critical objective. In this paper, we introduce stepwise DPO (sDPO), an innovative extension of the recently popularized Direct Preference Optimization (DPO) technique for alignment tuning. sDPO systematically partitions the available preference datasets and applies them incrementally, rather than utilizing the entire dataset simultaneously. This stepwise approach enables the integration of progressively more aligned reference models within the DPO training framework. Our empirical results demonstrate that sDPO not only enhances the alignment precision of reference models but also significantly improves the overall performance of the final model, surpassing other prominent LLMs with larger parameter counts.
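The core of sDPO, as described in the abstract, is a loop: split the preference data into chunks, run DPO on one chunk at a time, and promote the just-trained policy to be the frozen reference model for the next chunk. The sketch below illustrates that loop in PyTorch; the `train_one_chunk` callback and the log-probability bookkeeping it would need are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the sDPO loop, assuming PyTorch models and a hypothetical
# train_one_chunk(policy, ref, chunk, loss_fn) callback that runs ordinary DPO
# updates on a single data chunk. Illustration only, not the paper's code.
import copy
import torch.nn.functional as F

def dpo_loss(policy_lp_chosen, policy_lp_rejected, ref_lp_chosen, ref_lp_rejected, beta=0.1):
    # Standard DPO objective: push the policy to prefer the chosen response over
    # the rejected one, measured relative to a frozen reference model.
    margin = (policy_lp_chosen - ref_lp_chosen) - (policy_lp_rejected - ref_lp_rejected)
    return -F.logsigmoid(beta * margin).mean()

def frozen_copy(model):
    # Snapshot the current policy and freeze it for use as a reference model.
    ref = copy.deepcopy(model).eval()
    for p in ref.parameters():
        p.requires_grad_(False)
    return ref

def sdpo_train(model, preference_chunks, train_one_chunk):
    # Key idea: do not use all preference data at once. Train on one chunk at a
    # time, and after each step the just-trained policy becomes the (frozen,
    # more aligned) reference model for the next step.
    ref_model = frozen_copy(model)            # step 1: the SFT model is the reference
    for chunk in preference_chunks:
        train_one_chunk(model, ref_model, chunk, dpo_loss)
        ref_model = frozen_copy(model)        # more aligned reference for the next step
    return model
```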
2024
Evalverse: Unified and Accessible Library for Large Language Model Evaluation
Jihoo Kim | Wonho Song | Dahyun Kim | Yunsu Kim | Yungi Kim | Chanjun Park
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
This paper introduces Evalverse, a novel library that streamlines the evaluation of Large Language Models (LLMs) by unifying disparate evaluation tools into a single, user-friendly framework. Evalverse enables individuals with limited knowledge of artificial intelligence to easily request LLM evaluations and receive detailed reports, facilitated by an integration with communication platforms like Slack. Thus, Evalverse serves as a powerful tool for the comprehensive assessment of LLMs, offering both researchers and practitioners a centralized and easily accessible evaluation framework. Finally, we also provide a demo video for Evalverse, showcasing its capabilities and implementation in a two-minute format.
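The central design idea in the abstract is unifying disparate evaluation tools behind one interface so that a single request fans out to many benchmarks and comes back as one report. The toy sketch below shows that dispatcher pattern; the class and method names are assumptions made for illustration and are not Evalverse's actual API.

```python
# Illustrative sketch of a "unified evaluation" dispatcher in the spirit of the
# abstract. Names are hypothetical; this is not Evalverse's real interface.
from typing import Callable

class UnifiedEvaluator:
    """Registers heterogeneous evaluation backends behind one interface."""

    def __init__(self) -> None:
        self._backends: dict[str, Callable[[str], dict]] = {}

    def register(self, benchmark: str, runner: Callable[[str], dict]) -> None:
        # Each backend wraps one existing evaluation tool.
        self._backends[benchmark] = runner

    def evaluate(self, model_name: str, benchmarks: list[str]) -> dict:
        # One request fans out to every requested backend and returns one report,
        # which could then be posted to a channel such as Slack.
        return {b: self._backends[b](model_name) for b in benchmarks}

if __name__ == "__main__":
    ev = UnifiedEvaluator()
    ev.register("toy_benchmark", lambda model: {"accuracy": 0.0})  # placeholder backend
    print(ev.evaluate("my-llm", ["toy_benchmark"]))
```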
SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
Sanghoon Kim | Dahyun Kim | Chanjun Park | Wonsung Lee | Wonho Song | Yunsu Kim | Hyeonwoo Kim | Yungi Kim | Hyeonju Lee | Jihoo Kim | Changbae Ahn | Seonghoon Yang | Sukyung Lee | Hyunbyung Park | Gyoungjin Gim | Mikyoung Cha | Hwalsuk Lee | Sunghun Kim
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)
We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. Inspired by recent efforts to efficiently up-scale LLMs, we present a method for scaling LLMs called depth up-scaling (DUS), which encompasses depthwise scaling and continued pretraining. In contrast to other LLM up-scaling methods that use mixture-of-experts, DUS requires no complex changes for efficient training and inference. We show experimentally that DUS is simple yet effective in scaling up high-performance LLMs from small ones. Building on the DUS model, we additionally present SOLAR 10.7B-Instruct, a variant fine-tuned for instruction-following capabilities, surpassing Mixtral-8x7B-Instruct. SOLAR 10.7B is publicly available under the Apache 2.0 license, promoting broad access and application in the LLM field.
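Depthwise scaling, the first half of DUS, can be pictured as duplicating an n-layer decoder stack, dropping the last m layers from one copy and the first m layers from the other, and concatenating the two to obtain 2n - 2m layers (for example, n = 32 with m = 8 yields 48 layers); continued pretraining then recovers performance. The sketch below shows only the layer-surgery step and assumes a model that exposes its transformer blocks as a module list (as Llama-style Hugging Face models do); it is an illustration, not the paper's released code.

```python
# A minimal sketch of depthwise scaling for DUS, assuming a decoder-only model
# whose transformer blocks are available as an nn.ModuleList (e.g.
# model.model.layers in Llama-style checkpoints). Continued pretraining of the
# merged model is not shown.
import copy
import torch.nn as nn

def depthwise_scale(layers: nn.ModuleList, m: int) -> nn.ModuleList:
    """Duplicate an n-layer stack, drop the last m layers of the first copy and
    the first m layers of the second copy, then concatenate: n -> 2n - 2m layers."""
    n = len(layers)
    first = [copy.deepcopy(layers[i]) for i in range(0, n - m)]   # layers 0 .. n-m-1
    second = [copy.deepcopy(layers[i]) for i in range(m, n)]      # layers m .. n-1
    return nn.ModuleList(first + second)

# Example arithmetic: a 32-layer base with m = 8 produces a 48-layer model,
# which is then continued-pretrained to recover and surpass base performance.
```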