Hyokun Yun


2024

Evolutionary Contrastive Distillation for Language Model Alignment
Julian Katz-Samuels | Zheng Li | Hyokun Yun | Priyanka Nigam | Yi Xu | Vaclav Petricek | Bing Yin | Trishul Chilimbi
Findings of the Association for Computational Linguistics: EMNLP 2024

The ability of large language models (LLMs) to execute complex instructions is essential for their real-world applications. However, several recent studies indicate that LLMs struggle with challenging instructions. In this paper, we propose Evolutionary Contrastive Distillation (ECD), a novel method for generating high-quality synthetic preference data designed to enhance the complex instruction-following capability of language models. ECD generates data that specifically illustrates the difference between a response that successfully follows a set of complex instructions and a response that is high-quality, but nevertheless makes some subtle mistakes. This is done by prompting LLMs to progressively evolve simple instructions to more complex instructions. When the complexity of an instruction is increased, the original successful response to the original instruction becomes a “hard negative” response for the new instruction: it meets most of the requirements of the new instruction but narrowly misses one or two. By pairing a good response with such a hard negative response, and employing contrastive learning algorithms such as DPO, we improve language models’ ability to follow complex instructions. Empirically, we observe that our method yields a 7B model that exceeds the complex instruction-following performance of current SOTA 7B models and is competitive even with open-source 70B models.
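The data construction described above is straightforward to express in code. Below is a minimal sketch, not the authors' implementation, of how ECD-style preference pairs could be assembled for DPO training; the `generate_response` and `evolve_instruction` callables stand in for LLM prompting steps and are hypothetical.

```python
# Minimal sketch of ECD-style preference-pair construction (not the authors'
# code). `generate_response` and `evolve_instruction` are hypothetical
# callables wrapping LLM prompting steps.

def build_ecd_preference_pairs(simple_instructions, generate_response, evolve_instruction):
    pairs = []
    for instruction in simple_instructions:
        # A response that successfully follows the original, simpler instruction.
        simple_response = generate_response(instruction)

        # Progressively evolve the instruction into a more complex one,
        # e.g. by adding constraints on format, length, or content.
        complex_instruction = evolve_instruction(instruction)

        # A fresh response that satisfies the evolved instruction.
        complex_response = generate_response(complex_instruction)

        # Relative to the evolved instruction, the old response is a "hard
        # negative": high quality, but missing one or two new requirements.
        pairs.append({
            "prompt": complex_instruction,
            "chosen": complex_response,
            "rejected": simple_response,
        })
    return pairs
```

Each resulting triple can then be fed to a contrastive preference-learning objective such as DPO.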

2022

MICO: Selective Search with Mutual Information Co-training
Zhanyu Wang | Xiao Zhang | Hyokun Yun | Choon Hui Teo | Trishul Chilimbi
Proceedings of the 29th International Conference on Computational Linguistics

In contrast to traditional exhaustive search, which evaluates a query against every document, selective search first clusters documents into several groups so that each query is executed within only one or a few groups. Selective search is designed to reduce the latency and computation in modern large-scale search systems. In this study, we propose MICO, a Mutual Information CO-training framework for selective search with minimal supervision using the search logs. After training, MICO not only clusters the documents but also routes unseen queries to the relevant clusters for efficient retrieval. In our empirical experiments, MICO significantly improves selective search on multiple metrics and outperforms a number of existing competitive baselines.
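A mutual-information co-training objective of this kind can be sketched as follows. This is an illustration under the assumption that a query encoder and a document encoder each produce soft cluster assignments for matched (query, clicked document) pairs from search logs; it is not the exact loss of the paper.

```python
import torch
import torch.nn.functional as F

def mutual_information_loss(query_logits, doc_logits, eps=1e-8):
    """Negative mutual information between the cluster assignments of a
    query encoder and a document encoder, estimated over a batch of
    matched (query, clicked document) pairs.

    Illustrative sketch of a mutual-information co-training objective;
    the exact formulation in MICO may differ.
    """
    p_q = F.softmax(query_logits, dim=1)  # (batch, n_clusters)
    p_d = F.softmax(doc_logits, dim=1)    # (batch, n_clusters)

    # Joint distribution over (query cluster, document cluster), averaged
    # over the matched pairs in the batch; its entries sum to 1.
    joint = (p_q.unsqueeze(2) * p_d.unsqueeze(1)).mean(dim=0)
    marg_q = joint.sum(dim=1, keepdim=True)  # query-cluster marginal
    marg_d = joint.sum(dim=0, keepdim=True)  # document-cluster marginal

    mi = (joint * (torch.log(joint + eps)
                   - torch.log(marg_q + eps)
                   - torch.log(marg_d + eps))).sum()
    return -mi  # minimizing pushes matched pairs into the same cluster
```

At retrieval time, an unseen query would be routed to the cluster(s) with the highest assignment probability under the query encoder, and only those clusters are searched.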

2019

Robustness to Capitalization Errors in Named Entity Recognition
Sravan Bodapati | Hyokun Yun | Yaser Al-Onaizan
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

Robustness to capitalization errors is a highly desirable characteristic of named entity recognizers, yet we find standard models for the task are surprisingly brittle to such noise. Existing methods for improving robustness to this noise completely discard the given orthographic information, which significantly degrades their performance on well-formed text. We propose a simple alternative approach based on data augmentation, which allows the model to learn to utilize or ignore orthographic information depending on its usefulness in the context. It achieves competitive robustness to capitalization errors while sacrificing negligible performance on well-formed text and significantly improving generalization on noisy user-generated text. Our experiments clearly and consistently validate our claim across different types of machine learning models, languages, and dataset sizes.
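A minimal sketch of the data-augmentation idea, assuming training examples are (tokens, labels) pairs; the exact augmentation mix used in the paper may differ.

```python
import random

def augment_with_case_noise(dataset, lowercase_prob=0.5, seed=0):
    """Add lower-cased copies of training sentences so an NER model learns
    to use capitalization when it is reliable and to ignore it otherwise.

    Illustrative sketch only; `dataset` is assumed to be a list of
    (tokens, labels) pairs.
    """
    rng = random.Random(seed)
    augmented = list(dataset)  # keep the original, well-formed examples
    for tokens, labels in dataset:
        if rng.random() < lowercase_prob:
            # Labels are unchanged, so the model must learn to recognize
            # the same entities without orthographic cues.
            augmented.append(([token.lower() for token in tokens], labels))
    return augmented
```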

2017

Deep Active Learning for Named Entity Recognition
Yanyao Shen | Hyokun Yun | Zachary Lipton | Yakov Kronrod | Animashree Anandkumar
Proceedings of the 2nd Workshop on Representation Learning for NLP

Deep neural networks have advanced the state of the art in named entity recognition. However, under typical training procedures, advantages over classical methods emerge only with large datasets. As a result, deep learning is employed only when large public datasets or a large budget for manually labeling data is available. In this work, we show otherwise: by combining deep learning with active learning, we can outperform classical methods even with a significantly smaller amount of training data.
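One round of uncertainty-based selection can be sketched as follows. This is an illustration using a length-normalized confidence score in the spirit of the paper's acquisition criteria, with a hypothetical `model.token_log_probs` interface.

```python
def select_for_annotation(model, unlabeled_sentences, budget):
    """Pick the `budget` sentences the model is least confident about,
    to be sent for manual labeling in the next active-learning round.

    Illustrative sketch; `model.token_log_probs(sentence)` is assumed to
    return the per-token log-probabilities of the predicted tag sequence.
    """
    scored = []
    for sentence in unlabeled_sentences:
        log_probs = model.token_log_probs(sentence)
        # Normalize by length so long sentences are not unfairly penalized.
        confidence = sum(log_probs) / max(len(log_probs), 1)
        scored.append((confidence, sentence))
    # The least confident sentences are the most informative to label next.
    scored.sort(key=lambda pair: pair[0])
    return [sentence for _, sentence in scored[:budget]]
```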

2016

WordRank: Learning Word Embeddings via Robust Ranking
Shihao Ji | Hyokun Yun | Pinar Yanardag | Shin Matsushima | S. V. N. Vishwanathan
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing