Seunghee Han
2024
Where Visual Speech Meets Language: VSP-LLM Framework for Efficient and Context-Aware Visual Speech Processing
Jeonghun Yeo | Seunghee Han | Minsu Kim | Yong Man Ro
Findings of the Association for Computational Linguistics: EMNLP 2024
In visual speech processing, context modeling capability is one of the most important requirements due to the ambiguous nature of lip movements. For example, homophenes, words that share identical lip movements but produce different sounds, can be distinguished only by considering context. In this paper, we propose a novel framework, Visual Speech Processing incorporated with LLMs (VSP-LLM), that maximizes context modeling ability by leveraging the power of LLMs. Specifically, VSP-LLM is designed to perform the multiple tasks of visual speech recognition and translation, where the given instructions control the type of task. The input video is mapped into the input latent space of an LLM by employing a self-supervised visual speech model. Motivated by the redundancy in input frames, we propose a novel deduplication method that reduces the embedded visual features by employing visual speech units. Through the proposed deduplication and low-rank adaptation, VSP-LLM can be trained in a computationally efficient manner. On the MuAViC translation benchmark, we demonstrate that VSP-LLM trained on just 30 hours of labeled data translates more effectively than a recent model trained with 433 hours of data.
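As a rough illustration of the deduplication step described in the abstract, the sketch below merges consecutive frames that share the same visual speech unit by averaging their features. The function name, tensor shapes, and toy values are assumptions for illustration only, not the authors' implementation.

```python
import torch

def deduplicate_features(features: torch.Tensor, units: torch.Tensor) -> torch.Tensor:
    """Merge consecutive frames that share the same visual speech unit.

    features: (T, D) frame-level visual features
    units:    (T,)   discrete visual speech unit index per frame
    Returns a (T', D) tensor with T' <= T, where each run of identical
    units is collapsed into a single averaged embedding.
    """
    merged = []
    start = 0
    for t in range(1, len(units) + 1):
        # Close the current run at the end of the sequence or when the unit changes.
        if t == len(units) or units[t] != units[start]:
            merged.append(features[start:t].mean(dim=0))
            start = t
    return torch.stack(merged)

# Toy usage: 6 frames with units [5, 5, 2, 2, 2, 7] collapse to 3 embeddings.
feats = torch.randn(6, 1024)
units = torch.tensor([5, 5, 2, 2, 2, 7])
reduced = deduplicate_features(feats, units)
print(reduced.shape)  # torch.Size([3, 1024])
```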
Constructing Korean Learners’ L2 Speech Corpus of Seven Languages for Automatic Pronunciation Assessment
Seunghee Han | Sunhee Kim | Minhwa Chung
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Multilingual L2 speech corpora for developing automatic speech assessment are currently available, but they lack comprehensive annotations of L2 speech from non-native speakers of various languages. This study introduces the methodology for designing a Korean learners’ L2 speech corpus of seven languages: English, Japanese, Chinese, French, German, Spanish, and Russian. We describe the development of reading scripts, reading tasks, scoring criteria, and expert evaluation methods in detail. Our corpus contains 1,200 hours of L2 speech data from Korean learners (400 hours for English, 200 hours each for Japanese and Chinese, and 100 hours each for French, German, Spanish, and Russian). The corpus is annotated with spelling and pronunciation transcriptions, expert pronunciation assessment scores (accuracy of pronunciation and fluency of prosody), and metadata such as gender, age, self-reported language proficiency, and pronunciation error types. We also propose a practical verification method and a reliability threshold to ensure the reliability and objectivity of large-scale subjective evaluation data.
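The abstract mentions applying a reliability threshold to large-scale subjective scores. The sketch below shows one generic way such a check could look, flagging a batch of ratings when agreement between two experts falls below a cutoff; the Pearson-correlation statistic, the 0.6 threshold, and the toy scores are assumptions, not the paper's verification method.

```python
import numpy as np

def rater_agreement(scores_a: np.ndarray, scores_b: np.ndarray) -> float:
    """Pearson correlation between two raters' pronunciation scores."""
    return float(np.corrcoef(scores_a, scores_b)[0, 1])

def needs_re_evaluation(scores_a, scores_b, threshold: float = 0.6) -> bool:
    """Flag a batch of ratings for review when agreement falls below the threshold.

    The 0.6 cutoff is an illustrative placeholder, not the paper's value.
    """
    return rater_agreement(np.asarray(scores_a), np.asarray(scores_b)) < threshold

# Toy usage with 5-point pronunciation accuracy scores from two experts.
expert_1 = [4, 3, 5, 2, 4, 3]
expert_2 = [4, 2, 5, 3, 4, 3]
print(needs_re_evaluation(expert_1, expert_2))  # False for this toy sample
```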