AudioVSR: Enhancing Video Speech Recognition with Audio Data
Xiaoda Yang | Xize Cheng | Jiaqi Duan | Hongshun Qiu | Minjie Hong | Minghui Fang | Shengpeng Ji | Jialong Zuo | Zhiqing Hong | Zhimeng Zhang | Tao Jin
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Visual Speech Recognition (VSR) aims to predict spoken content by analyzing lip movements in videos. Recently reported state-of-the-art VSR results often rely on increasingly large amounts of video data, yet publicly available transcribed video datasets remain scarce compared to audio data. To further enhance the VSR model using audio data, we employed a generative model for data inflation, integrating the synthetic data with authentic visual data. In essence, the generative model contributes an additional source of knowledge that strengthens the recognition model. Regarding the cross-language issue, previous work has shown poor performance on non-Indo-European languages. We trained a multi-language-family modal fusion model, AudioVSR. Leveraging the concept of modal transfer, we achieved significant results on downstream VSR tasks under conditions of data scarcity. To the best of our knowledge, AudioVSR represents the first work on cross-language-family audio-lip alignment, achieving a new SOTA in the cross-language scenario.
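The data-inflation idea described above can be pictured as mixing generator-produced (synthetic) training pairs into the authentic transcribed set before training the recognizer. The sketch below is only an illustration of that mixing policy; the function name `inflate_dataset`, the `synth_ratio` parameter, and the sampling strategy are assumptions for demonstration, not the paper's actual pipeline.

```python
import random

def inflate_dataset(real_samples, synth_samples, synth_ratio=0.5, seed=0):
    """Mix synthetic (generated) samples into the authentic training set.

    `synth_ratio` is the number of synthetic samples to add, expressed as
    a fraction of the authentic set's size. This is a hypothetical
    illustration of data inflation, not the paper's method.
    """
    rng = random.Random(seed)
    n_synth = int(len(real_samples) * synth_ratio)
    # Draw synthetic samples without replacement, capped at availability.
    mixed = list(real_samples) + rng.sample(
        synth_samples, min(n_synth, len(synth_samples))
    )
    rng.shuffle(mixed)  # interleave synthetic and authentic pairs
    return mixed

# Toy (video, transcript) pairs standing in for real and generated data.
real = [("video_r%d" % i, "transcript_r%d" % i) for i in range(10)]
synth = [("video_s%d" % i, "transcript_s%d" % i) for i in range(20)]
training_set = inflate_dataset(real, synth, synth_ratio=0.5)
```

With `synth_ratio=0.5`, the inflated set keeps all 10 authentic pairs and adds 5 synthetic ones, giving the recognizer more supervision than the scarce transcribed video alone provides.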