Seunghyun Hwang
2024
Kiss up, Kick down: Exploring Behavioral Changes in Multi-modal Large Language Models with Assigned Visual Personas
Seungjong Sun | Eungu Lee | Seo Yeon Baek | Seunghyun Hwang | Wonbyung Lee | Dongyan Nan | Bernard J Jansen | Jang Hyun Kim
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This study is the first to explore whether multi-modal large language models (LLMs) can align their behaviors with visual personas, addressing a significant gap in the literature, which predominantly focuses on text-based personas. We developed a novel dataset of 5K fictional avatar images for assignment as visual personas to LLMs and analyzed their negotiation behaviors based on the visual traits depicted in these images, with a particular focus on aggressiveness. The results indicate that LLMs assess the aggressiveness of images in a manner similar to humans and exhibit more aggressive negotiation behaviors when prompted with an aggressive visual persona. Interestingly, the LLMs negotiated more aggressively when the opponent's image appeared less aggressive than their own, and less aggressively when it appeared more aggressive.
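To make the setup concrete, here is a minimal sketch of how a visual persona might be supplied to a vision-capable chat model before a negotiation turn. This is not the authors' code: the model name, prompts, and base64 encoding step are illustrative assumptions, shown with the OpenAI Python SDK.

```python
# Hypothetical sketch: assigning a visual persona to a multi-modal LLM.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def negotiate_with_persona(persona_image_path: str, opponent_message: str) -> str:
    """Ask a vision-capable model to role-play the avatar in the image."""
    with open(persona_image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[
            {
                "role": "system",
                "content": "You are the character depicted in the attached "
                           "avatar. Negotiate in a manner consistent with "
                           "its visual traits.",
            },
            {
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                    {"type": "text", "text": opponent_message},
                ],
            },
        ],
    )
    return response.choices[0].message.content
```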
2023
Adapting Text-based Dialogue State Tracker for Spoken Dialogues
Jaeseok Yoon | Seunghyun Hwang | Han Ran | Jeong-Uk Bang | Kee-Eung Kim
Proceedings of The Eleventh Dialog System Technology Challenge
Although there have been remarkable advances in dialogue systems through the Dialog System Technology Challenge (DSTC), building a robust task-oriented dialogue system with a speech interface remains a key challenge. Most of the progress has been made on text-based dialogue systems, since datasets with written corpora are abundant while those with spoken dialogues are very scarce. However, as voice assistants such as Siri and Alexa demonstrate, transferring this success to spoken dialogues is of practical importance. In this paper, we describe our engineering effort in building a highly successful model that participated in the speech-aware dialogue systems technology challenge track of DSTC11. Our model consists of three major modules: (1) automatic speech recognition (ASR) error correction to bridge the gap between spoken and written utterances, (2) a text-based dialogue state tracker (D3ST) that estimates slots and values using slot descriptions, and (3) post-processing to recover errors in the estimated slot values. Our experiments show that an explicit ASR error correction module, post-processing, and data augmentation are all important for adapting a text-based dialogue state tracker to spoken dialogue corpora.
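The three-module structure can be summarized in a short pipeline sketch. This is not the DSTC11 submission's code; each module body is a labeled placeholder (identity correction, a stub D3ST prediction, and fuzzy ontology matching via difflib), standing in for the trained components the abstract describes.

```python
# Hypothetical sketch of the three-module pipeline described above.
import difflib
from typing import Dict, List


def correct_asr_errors(asr_hypothesis: str) -> str:
    """Module 1: rewrite a noisy ASR transcript toward clean written text.
    A trained seq2seq corrector would go here; identity serves as a stub."""
    return asr_hypothesis


def track_dialogue_state(utterance: str,
                         slot_descriptions: Dict[str, str]) -> Dict[str, str]:
    """Module 2: a D3ST-style tracker conditions on natural-language slot
    descriptions to predict slot values (stub returns empty predictions)."""
    return {slot: "" for slot in slot_descriptions}


def postprocess(state: Dict[str, str],
                ontology: Dict[str, List[str]]) -> Dict[str, str]:
    """Module 3: snap each predicted value to the closest ontology entry,
    recovering residual ASR-induced spelling errors."""
    fixed = {}
    for slot, value in state.items():
        close = difflib.get_close_matches(value, ontology.get(slot, []), n=1)
        fixed[slot] = close[0] if close else value
    return fixed


def pipeline(asr_hypothesis: str,
             slot_descriptions: Dict[str, str],
             ontology: Dict[str, List[str]]) -> Dict[str, str]:
    text = correct_asr_errors(asr_hypothesis)
    state = track_dialogue_state(text, slot_descriptions)
    return postprocess(state, ontology)
```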