Sound of Story: Multi-modal Storytelling with Audio
Jaeyeon Bae | Seokhoon Jeong | Seokun Kang | Namgi Han | Jae-Yon Lee | Hyounghun Kim | Taehwan Kim
Findings of the Association for Computational Linguistics: EMNLP 2023
Storytelling is multi-modal in the real world. When one tells a story, one may use visualizations and sounds along with the narrative itself. However, prior studies on storytelling datasets and tasks have paid little attention to sound, even though sound also conveys meaningful semantics of the story. Therefore, we propose to extend the areas of story understanding and telling by establishing a new component called background sound, which is story-context-based audio without any linguistic information. For this purpose, we introduce a new dataset, called Sound of Story (SoS), which has paired image and text sequences with corresponding sound or background music for a story. To the best of our knowledge, this is the largest well-curated dataset for storytelling with sound. Our SoS dataset consists of 27,354 stories with an average of 19.6 images per story and 984 hours of speech-decoupled audio, such as background music and other sounds. As benchmark tasks for storytelling with sound on this dataset, we propose retrieval tasks between modalities and audio generation tasks from image-text sequences, introducing strong baselines for them. We believe the proposed dataset and tasks may shed light on the multi-modal understanding of storytelling in terms of sound.