MS2SL: Multimodal Spoken Data-Driven Continuous Sign Language Production

Jian Ma, Wenguan Wang, Yi Yang, Feng Zheng


Abstract
Sign language understanding has made significant strides; however, there is still no viable solution for generating sign sequences directly from entire spoken content, e.g., text or speech. In this paper, we propose a unified framework for continuous sign language production, easing communication between sign and non-sign language users. In particular, a sequence diffusion model, utilizing embeddings extracted from text or speech, is crafted to generate sign predictions step by step. Moreover, by creating a joint embedding space for text, audio, and sign, we bind these modalities and leverage the semantic consistency among them to provide informative feedback for the model training. This embedding-consistency learning strategy minimizes the reliance on sign triplets and ensures continuous model refinement, even with a missing audio modality. Experiments on How2Sign and PHOENIX14T datasets demonstrate that our model achieves competitive performance in sign language production.
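To make the embedding-consistency idea from the abstract concrete, the sketch below projects text, audio, and sign features into one joint space and penalizes semantic disagreement between whichever modality pairs are available, dropping the audio terms when that modality is missing. The encoder dimensions, linear projections, and cosine-based loss are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Hypothetical joint embedding space for text, audio, and sign features."""

    def __init__(self, text_dim=768, audio_dim=512, sign_dim=256, joint_dim=256):
        super().__init__()
        # Simple linear projections into a shared space (assumed for illustration).
        self.text_proj = nn.Linear(text_dim, joint_dim)
        self.audio_proj = nn.Linear(audio_dim, joint_dim)
        self.sign_proj = nn.Linear(sign_dim, joint_dim)

    def consistency_loss(self, text_feat, sign_feat, audio_feat=None):
        # Pairwise cosine-consistency terms; audio terms are skipped
        # when the audio modality is unavailable.
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        s = F.normalize(self.sign_proj(sign_feat), dim=-1)
        loss = 1.0 - (t * s).sum(-1).mean()                 # text <-> sign
        if audio_feat is not None:
            a = F.normalize(self.audio_proj(audio_feat), dim=-1)
            loss = loss + (1.0 - (a * s).sum(-1).mean())    # audio <-> sign
            loss = loss + (1.0 - (t * a).sum(-1).mean())    # text <-> audio
        return loss

# Usage with dummy batch features (batch of 4 clips):
model = JointEmbedding()
text, sign, audio = torch.randn(4, 768), torch.randn(4, 256), torch.randn(4, 512)
print(model.consistency_loss(text, sign, audio))  # all three modalities present
print(model.consistency_loss(text, sign))         # audio modality missing
```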
Anthology ID:
2024.findings-acl.432
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, André Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7241–7254
URL:
https://aclanthology.org/2024.findings-acl.432
Cite (ACL):
Jian Ma, Wenguan Wang, Yi Yang, and Feng Zheng. 2024. MS2SL: Multimodal Spoken Data-Driven Continuous Sign Language Production. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7241–7254, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
MS2SL: Multimodal Spoken Data-Driven Continuous Sign Language Production (Ma et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.432.pdf