Hyunkuk Lim


2024

TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation
Taeyang Yun | Hyunkuk Lim | Jeonghwan Lee | Min Song
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Emotion Recognition in Conversation (ERC) plays a crucial role in enabling dialogue systems to respond effectively to user requests. The emotions in a conversation can be identified from the representations of various modalities, such as audio, visual, and text. However, because the non-verbal modalities contribute only weakly to recognizing emotions, multimodal ERC has long been considered a challenging task. In this paper, we propose the Teacher-leading Multimodal fusion network for ERC (TelME). TelME incorporates cross-modal knowledge distillation to transfer information from a language model acting as the teacher to the non-verbal students, thereby strengthening the weak modalities. We then combine the multimodal features using a shifting fusion approach in which the student networks support the teacher. TelME achieves state-of-the-art performance on MELD, a multi-speaker conversation dataset for ERC. Finally, we demonstrate the effectiveness of our components through additional experiments.
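
The abstract's core mechanism is cross-modal knowledge distillation from a text teacher to audio/visual students. As a rough illustration of that idea (not TelME's actual objective: the temperature, mixing weight, and all names below are assumptions), a standard soft-label distillation loss in PyTorch might look like this:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Soft-label KL distillation mixed with hard-label cross-entropy.

    `temperature` and `alpha` are illustrative defaults, not values
    reported for TelME.
    """
    # Soften both distributions so the student learns from the teacher's
    # full output distribution, not just its argmax prediction.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps the KL gradient on the same scale as the CE term.
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Teacher logits come from the text model and are detached, so only the
# non-verbal student is updated by this loss:
#   t_logits = text_teacher(text_batch).detach()
#   loss = distillation_loss(audio_student(audio_batch), t_logits, labels)
```

Scaling the KL term by the squared temperature is the usual correction that keeps the soft and hard losses on comparable scales as the temperature changes.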

Efficient Terminology Integration for LLM-based Translation in Specialized Domains
Sejoon Kim | Mingi Sung | Jeonghwan Lee | Hyunkuk Lim | Jorge Gimenez Perez
Proceedings of the Ninth Conference on Machine Translation

Traditional machine translation methods typically involve training models directly on large parallel corpora, with limited emphasis on specialized terminology. However, in specialized fields such as patents, finance, and biomedicine, terminology is crucial for translation: many terms should not be translated according to the semantics of the sentence but instead following agreed-upon conventions. In this paper, we introduce a methodology that efficiently trains models with a smaller amount of data while preserving the accuracy of terminology translation. A terminology extraction model generates a glossary from existing training datasets, and the LLM is further refined by instructing it to incorporate these terms into its translations. We achieve this through a systematic process of term extraction and glossary creation using the Trie Tree algorithm, followed by data reconstruction that teaches the LLM how to integrate these specialized terms. This methodology enhances the model’s ability to handle specialized terminology and ensures high-quality translations, particularly in fields where term consistency is crucial. Our approach achieves the highest translation score to date among participants in the WMT patent task, demonstrating its effectiveness and broad applicability in specialized translation domains where general methods often fall short.
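
The pipeline hinges on extracting glossary terms from the source text before prompting the LLM. A minimal sketch of trie-based, greedy longest-match term lookup (the glossary entries, function names, and whitespace tokenization here are illustrative assumptions, not the paper's implementation) could be:

```python
from typing import Dict, List, Optional, Tuple

class TrieNode:
    def __init__(self) -> None:
        self.children: Dict[str, "TrieNode"] = {}
        self.translation: Optional[str] = None  # set only on terminal nodes

def build_trie(glossary: Dict[str, str]) -> TrieNode:
    """Insert each source-language term, tokenized on whitespace."""
    root = TrieNode()
    for term, translation in glossary.items():
        node = root
        for tok in term.lower().split():
            node = node.children.setdefault(tok, TrieNode())
        node.translation = translation
    return root

def match_terms(sentence: str, root: TrieNode) -> List[Tuple[str, str]]:
    """Greedy longest-match scan; returns (term, translation) pairs."""
    tokens = sentence.lower().split()
    matches: List[Tuple[str, str]] = []
    i = 0
    while i < len(tokens):
        node, j, longest = root, i, None
        while j < len(tokens) and tokens[j] in node.children:
            node = node.children[tokens[j]]
            j += 1
            if node.translation is not None:
                longest = (j, node.translation)  # remember deepest full term
        if longest:
            end, translation = longest
            matches.append((" ".join(tokens[i:end]), translation))
            i = end  # resume scanning after the matched term
        else:
            i += 1
    return matches

# Hypothetical patent-domain glossary; the matched pairs would then be
# injected into the LLM prompt as translation constraints.
glossary = {"prior art": "선행 기술", "claim": "청구항"}
trie = build_trie(glossary)
print(match_terms("The claim cites prior art from 2019", trie))
# -> [('claim', '청구항'), ('prior art', '선행 기술')]
```

Longest-match matters here because multi-word terms such as "prior art" must take precedence over any shorter glossary entry they contain; the extracted pairs can then be prepended to the translation prompt as constraints the LLM is instructed to follow.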