Analysis of Torso Movement for Signing Avatar Using Deep Learning

Shatabdi Choudhury


Abstract
Avatars are virtual, on-screen representations of humans used in various roles for sign language display, including translation and educational tools. Although avatars' ability to portray acceptable sign language with believable, human-like motion has improved in recent years, many still lack the naturalness and supporting movements of human signing. Such details are generally not included in linguistic annotation; nevertheless, they are essential to lifelike, communicative animation. This paper presents a deep learning model for use in a signing avatar. The study focuses on coordinating torso movement with the other parts of the body: the proposed model automatically computes torso rotation from the avatar's wrist positions. The resulting motion can improve user experience and engagement with the avatar.
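The paper does not specify its network architecture here, so the following is only a minimal, hypothetical sketch of the stated idea: a small regression network that maps the avatar's two wrist positions (3D each, six features) to a single torso rotation angle. The two-layer MLP, the synthetic data, and all dimensions are assumptions for illustration, not the author's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical): left + right wrist positions
# flattened to 6 features per frame, and a target torso yaw in radians.
X = rng.normal(size=(256, 6))
true_w = rng.normal(size=(6, 1))
y = np.tanh(X @ true_w)            # bounded synthetic "torso rotation"

# Assumed two-layer MLP: 6 inputs -> 16 hidden units -> 1 rotation output.
W1 = rng.normal(scale=0.5, size=(6, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(x):
    """Return predicted rotation and hidden activations."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.05
losses = []
for _ in range(200):
    pred, h = forward(X)
    err = pred - y                  # residual, shape (256, 1)
    losses.append(float(np.mean(err ** 2)))
    # Manual backpropagation through both layers (mean-squared-error loss).
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

A real system would train such a regressor on motion-capture frames of human signing, so that the torso follows the wrists the way a human signer's does rather than staying rigid.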
Anthology ID:
2022.sltat-1.2
Volume:
Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual: Challenges and Perspectives
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, John C. McDonald, Dimitar Shterionov, Rosalee Wolfe
Venue:
SLTAT
Publisher:
European Language Resources Association
Pages:
7–12
URL:
https://aclanthology.org/2022.sltat-1.2
Cite (ACL):
Shatabdi Choudhury. 2022. Analysis of Torso Movement for Signing Avatar Using Deep Learning. In Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual: Challenges and Perspectives, pages 7–12, Marseille, France. European Language Resources Association.
Cite (Informal):
Analysis of Torso Movement for Signing Avatar Using Deep Learning (Choudhury, SLTAT 2022)
PDF:
https://aclanthology.org/2022.sltat-1.2.pdf