Seungbeen Lee


2024

Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you!
Jiwan Chung | Seungwon Lim | Jaehyun Jeon | Seungbeen Lee | Youngjae Yu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Humans possess multimodal literacy, allowing us to actively integrate information from various modalities when reasoning. Faced with challenges like lexical ambiguity in text, we supplement the text with other modalities, such as thumbnail images or textbook illustrations. Is it possible for machines to achieve a similar multimodal understanding capability? In response, we present Understanding Pun with Image Explanations (UNPIE), a novel benchmark designed to assess the impact of multimodal inputs in resolving lexical ambiguities. Puns serve as the ideal subject for this evaluation due to their intrinsic ambiguity. Our dataset includes 1,000 puns, each accompanied by an image that explains both meanings. We pose three multimodal challenges with the annotations to assess different aspects of multimodal literacy: Pun Grounding, Disambiguation, and Reconstruction. The results indicate that various Socratic Models and Visual-Language Models improve over text-only models when given visual context, particularly as the complexity of the tasks increases.

Cactus: Towards Psychological Counseling Conversations using Cognitive Behavioral Theory
Suyeon Lee | Sunghwan Kim | Minju Kim | Dongjin Kang | Dongil Yang | Harim Kim | Minseok Kang | Dayi Jung | Min Hee Kim | Seungbeen Lee | Kyong-Mee Chung | Youngjae Yu | Dongha Lee | Jinyoung Yeo
Findings of the Association for Computational Linguistics: EMNLP 2024

Recently, the demand for psychological counseling has significantly increased as more individuals express concerns about their mental health. This surge has accelerated efforts to improve the accessibility of counseling by using large language models (LLMs) as counselors. Ensuring client privacy calls for open-source LLMs, but training them faces a key challenge: the absence of realistic counseling datasets. To address this, we introduce Cactus, a multi-turn dialogue dataset that emulates real-life interactions using the goal-oriented and structured approach of Cognitive Behavioral Therapy (CBT). We create a diverse and realistic dataset by designing clients with varied, specific personas and having counselors systematically apply CBT techniques in their interactions. To assess the quality of our data, we benchmark it against established psychological criteria used to evaluate real counseling sessions, ensuring alignment with expert evaluations. Experimental results demonstrate that Camel, a model trained with Cactus, outperforms other models in counseling skills, highlighting its effectiveness and potential as a counseling agent. We make our data, model, and code publicly available.