Kohtaro Tanaka
2024
Content-Specific Humorous Image Captioning Using Incongruity Resolution Chain-of-Thought
Kohtaro Tanaka | Kohei Uehara | Lin Gu | Yusuke Mukuta | Tatsuya Harada
Findings of the Association for Computational Linguistics: NAACL 2024
Although automated image captioning methods have benefited considerably from the development of large language models (LLMs), generating humorous captions is still a challenging task. Humorous captions generated by humans are unique to the image and reflect its content. However, captions generated using previous captioning models tend to be generic. Therefore, we propose incongruity-resolution chain-of-thought (IRCoT) as a novel prompting framework that creates content-specific resolutions from fine details extracted from an image. Furthermore, we integrate logit bias and negative sampling to suppress the output of generic resolutions. The results of experiments with GPT-4V demonstrate that our proposed framework effectively generated humorous captions tailored to the content of specific input images.
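The logit-bias component of the abstract can be illustrated with a short, hypothetical sketch using the OpenAI Chat Completions API. The prompt wording, the list of "generic" words, and the model name are placeholders rather than details taken from the paper, and image input is omitted for brevity; this is a minimal sketch of the general technique, not the authors' implementation.

```python
# Hypothetical sketch: using logit bias to downweight generic punchline words
# while prompting for a detail-grounded, incongruity-resolving caption.
from openai import OpenAI
import tiktoken

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

# Illustrative list of overused "generic" words to suppress (not from the paper).
generic_words = [" funny", " hilarious", " joke"]
logit_bias = {tid: -100 for w in generic_words for tid in enc.encode(w)}

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; the paper reports experiments with GPT-4V
    messages=[
        {"role": "system", "content": "You write humorous image captions."},
        {
            "role": "user",
            "content": (
                "Step 1: list fine-grained details of the image. "
                "Step 2: point out an incongruity among those details. "
                "Step 3: resolve it with a caption specific to the details."
            ),
        },
    ],
    logit_bias=logit_bias,  # suppresses the listed tokens during decoding
)
print(response.choices[0].message.content)
```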
2022
Learning to Evaluate Humor in Memes Based on the Incongruity Theory
Kohtaro Tanaka | Hiroaki Yamane | Yusuke Mori | Yusuke Mukuta | Tatsuya Harada
Proceedings of the Second Workshop on When Creative AI Meets Conversational AI
Memes are a widely used means of communication on social media platforms, and are known for their ability to “go viral”. In prior works, researchers have aimed to develop an AI system to understand humor in memes. However, existing methods are limited by the reliability and consistency of the annotations in the dataset used to train the underlying models. Moreover, they do not explicitly take advantage of the incongruity between images and their captions, which is known to be an important element of humor in memes. In this study, we first gathered real-valued humor annotations of 7,500 memes through a crowdwork platform. Based on this data, we propose a refinement process to extract memes that are not influenced by interpersonal differences in the perception of humor and a method designed to extract and utilize incongruities between images and captions. The results of an experimental comparison with models using vision and language pretraining models show that our proposed approach outperformed other models in a binary classification task of evaluating whether a given meme was humorous.
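One way to picture the image-caption incongruity signal described in the abstract is with a hedged sketch built on an off-the-shelf vision-language model. The CLIP checkpoint, the feature construction, and the classifier head below are illustrative assumptions, not the method used in the paper.

```python
# Hypothetical sketch: deriving an incongruity feature from image-caption
# similarity with CLIP, then classifying whether a meme is humorous.
import torch
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def incongruity_features(image_path: str, caption: str) -> torch.Tensor:
    """Return concatenated image, text, and dissimilarity features for a meme."""
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    similarity = (img_emb * txt_emb).sum(dim=-1, keepdim=True)
    # Low image-caption similarity is treated here as high incongruity.
    return torch.cat([img_emb, txt_emb, 1.0 - similarity], dim=-1)

# Simple binary humor classifier over the features (illustrative architecture).
classifier = torch.nn.Sequential(
    torch.nn.Linear(512 + 512 + 1, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 1),  # logit for "humorous" vs. "not humorous"
)
```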