Tae-Hyun Oh
2024
SMILE: Multimodal Dataset for Understanding Laughter in Video with Language Models
Lee Hyun | Kim Sung-Bin | Seungju Han | Youngjae Yu | Tae-Hyun Oh
Findings of the Association for Computational Linguistics: NAACL 2024
Despite the recent advances in artificial intelligence, building social intelligence remains a challenge. Among social signals, laughter is one of the distinctive expressions that occur during social interactions between humans. In this work, we tackle a new challenge for machines to understand the rationale behind laughter in video: Video Laugh Reasoning. We introduce this new task of explaining why people laugh in a particular video, along with a dataset for it. Our proposed dataset, SMILE, comprises video clips and language descriptions of why people laugh. We propose a baseline by leveraging the reasoning capacity of large language models (LLMs) with a textual video representation. Experiments show that our baseline can generate plausible explanations for laughter. We further investigate the scalability of our baseline by probing other video understanding tasks and in-the-wild videos. We release our dataset, code, and model checkpoints at https://github.com/postech-ami/SMILE-Dataset.
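To make the baseline's idea concrete, below is a minimal sketch of the general pipeline the abstract describes: flatten a clip's multimodal clues (dialogue, visual context, laughter timing) into text, then prompt an LLM for an explanation. The ClipAsText fields and the query_llm stub are illustrative assumptions, not the authors' actual interface; see the released code for the real implementation.

```python
# Hedged sketch: textual video representation + LLM prompting for laugh reasoning.
# Field names and `query_llm` are assumptions for illustration only.

from dataclasses import dataclass
from typing import List


@dataclass
class ClipAsText:
    """Textual stand-in for a video clip (assumed schema)."""
    transcript: List[str]       # spoken utterances, in order
    visual_captions: List[str]  # frame-level scene descriptions
    laughter_time: str          # when the laughter occurs, e.g. "00:07"


def build_prompt(clip: ClipAsText) -> str:
    """Flatten the multimodal clues into a single reasoning prompt."""
    lines = ["You are given a textual description of a video clip.", "Dialogue:"]
    lines += [f"- {u}" for u in clip.transcript]
    lines.append("Visual context:")
    lines += [f"- {c}" for c in clip.visual_captions]
    lines.append(f"The audience laughs at {clip.laughter_time}.")
    lines.append("Explain in one or two sentences why people laugh.")
    return "\n".join(lines)


def query_llm(prompt: str) -> str:
    """Placeholder for any chat/completion LLM call (assumption)."""
    return "(LLM-generated explanation would appear here)"


if __name__ == "__main__":
    clip = ClipAsText(
        transcript=["A: I fixed the bug.", "B: By deleting the test?"],
        visual_captions=["Two engineers stand at a whiteboard."],
        laughter_time="00:07",
    )
    print(query_llm(build_prompt(clip)))
```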
2019
Image Captioning with Very Scarce Supervised Data: Adversarial Semi-Supervised Learning Approach
Dong-Jin Kim | Jinsoo Choi | Tae-Hyun Oh | In So Kweon
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Constructing an organized dataset comprising a large number of images with several captions per image is a laborious task that requires vast human effort. On the other hand, collecting large numbers of images and sentences separately can be far easier. In this paper, we develop a novel data-efficient semi-supervised framework for training an image captioning model. We leverage massive unpaired image and caption data by learning to associate them. To this end, our proposed semi-supervised learning method assigns pseudo-labels to unpaired samples via Generative Adversarial Networks to learn the joint distribution of images and captions. To evaluate, we construct the scarcely-paired COCO dataset, a modified version of the MS COCO caption dataset. The empirical results show the effectiveness of our method compared to several strong baselines, especially when paired samples are scarce.
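The pseudo-labeling step the abstract mentions can be illustrated with a small sketch: a discriminator scores how plausible an (image, caption) feature pair is, and each unpaired image is assigned the caption from the unpaired pool that scores highest, producing pseudo-pairs for captioner training. The feature dimensions, network shape, and toy data below are assumptions for illustration, not the paper's exact model.

```python
# Hedged sketch of GAN-style pseudo-labeling for unpaired image/caption data.
# Dimensions and architecture are illustrative assumptions.

import torch
import torch.nn as nn

IMG_DIM, CAP_DIM = 512, 300  # assumed feature sizes


class PairDiscriminator(nn.Module):
    """Scores how plausible an (image, caption) feature pairing is."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + CAP_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, img, cap):
        return self.net(torch.cat([img, cap], dim=-1)).squeeze(-1)


def assign_pseudo_labels(disc, unpaired_imgs, caption_pool):
    """For each unpaired image, pick the highest-scoring caption as its pseudo-label."""
    with torch.no_grad():
        n_img, n_cap = unpaired_imgs.size(0), caption_pool.size(0)
        img = unpaired_imgs.unsqueeze(1).expand(n_img, n_cap, IMG_DIM)
        cap = caption_pool.unsqueeze(0).expand(n_img, n_cap, CAP_DIM)
        scores = disc(img, cap)      # (n_img, n_cap) pairwise plausibility
        return scores.argmax(dim=1)  # best caption index per image


if __name__ == "__main__":
    disc = PairDiscriminator()       # in practice, trained adversarially on real pairs
    imgs = torch.randn(4, IMG_DIM)   # toy unpaired image features
    caps = torch.randn(10, CAP_DIM)  # toy unpaired caption features
    print(assign_pseudo_labels(disc, imgs, caps))
```

In the full framework the discriminator is trained adversarially against real paired data, so its scores reflect the learned joint distribution of images and captions rather than random initialization as in this toy run.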