Jiyoung Lee
2024
KorNAT: LLM Alignment Benchmark for Korean Social Values and Common Knowledge
Jiyoung Lee | Minwoo Kim | Seungho Kim | Junghwan Kim | Seunghyun Won | Hwaran Lee | Edward Choi
Findings of the Association for Computational Linguistics: ACL 2024
To reliably deploy Large Language Models (LLMs) in a specific country, they must possess an understanding of the nation’s culture and basic knowledge. To this end, we introduce National Alignment, which measures the alignment between an LLM and a targeted country from two aspects: social value alignment and common knowledge alignment. We constructed KorNAT, the first benchmark that measures national alignment between LLMs and South Korea. KorNAT contains 4K and 6K multiple-choice questions for social value and common knowledge, respectively. To attain an appropriately aligned ground truth for the social value dataset, we conducted a large-scale public survey with 6,174 South Koreans. For common knowledge, we created the data based on South Korean textbooks and GED exams. Our dataset creation process is meticulously designed based on statistical sampling theory, and we also introduce metrics to measure national alignment, including three variations of social value alignment. We tested seven LLMs and found that only a few models passed our reference score, indicating there is room for improvement. Our dataset has received government approval following an assessment by a government-affiliated organization dedicated to evaluating dataset quality.
2022
Specializing Multi-domain NMT via Penalizing Low Mutual Information
Jiyoung Lee | Hantae Kim | Hyunchang Cho | Edward Choi | Cheonbok Park
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multi-domain Neural Machine Translation (NMT) trains a single model on multiple domains, which is appealing because it handles all domains within one model. An ideal multi-domain NMT system learns distinctive domain characteristics simultaneously; however, grasping each domain’s peculiarity is a non-trivial task. In this paper, we investigate domain-specific information through the lens of mutual information (MI) and propose a new objective that penalizes low MI so that it becomes higher. Our method achieves state-of-the-art performance among current competitive multi-domain NMT models. We also show that our objective raises low MI, resulting in a domain-specialized multi-domain NMT model.
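As a rough illustration of the idea in this abstract, below is a minimal PyTorch sketch of a training loss that penalizes low per-token mutual information between the domain and the prediction, approximated as the gap between domain-conditioned and domain-agnostic log-probabilities. The hinge-style penalty, the `lam` weight, the zero threshold, and the assumption of two decoder passes (`logits_domain`, `logits_generic`) are illustrative choices; the exact estimator and penalty form used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def mi_penalty_loss(logits_domain, logits_generic, targets, lam=1.0, pad_id=0):
    """Sketch: translation NLL plus a penalty that pushes low per-token MI higher.

    Per-token MI is approximated as
        MI(y_t; D | x, y_<t) ~= log p(y_t | x, y_<t, D) - log p(y_t | x, y_<t),
    i.e. the gap between domain-conditioned and domain-agnostic log-probs.
    Shapes: logits_* are (B, T, V); targets is (B, T) with long dtype.
    """
    logp_d = F.log_softmax(logits_domain, dim=-1)    # domain-aware predictions
    logp_g = F.log_softmax(logits_generic, dim=-1)   # domain-agnostic predictions

    tgt = targets.unsqueeze(-1)                      # (B, T, 1)
    lp_d = logp_d.gather(-1, tgt).squeeze(-1)        # log p(y_t | x, y_<t, D)
    lp_g = logp_g.gather(-1, tgt).squeeze(-1)        # log p(y_t | x, y_<t)

    mask = (targets != pad_id).float()
    nll = -(lp_d * mask).sum() / mask.sum()          # standard translation loss

    mi = lp_d - lp_g                                 # per-token MI estimate
    penalty = (F.relu(-mi) * mask).sum() / mask.sum()  # only tokens with low (negative) MI are penalized
    return nll + lam * penalty
```

Under this sketch, tokens whose domain-conditioned probability does not exceed the generic one incur an extra loss, nudging the model toward domain-specialized predictions.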