Data Augmentation for Sign Language Gloss Translation

Amit Moryossef, Kayo Yin, Graham Neubig, Yoav Goldberg


Abstract
Sign language translation (SLT) is often decomposed into video-to-gloss recognition and gloss-to-text translation, where a gloss is a sequence of transcribed spoken-language words in the order in which they are signed. We focus here on gloss-to-text translation, which we treat as a low-resource neural machine translation (NMT) problem. However, unlike traditional low-resource NMT, gloss-text pairs often have a higher lexical overlap and a lower syntactic overlap than pairs of spoken languages. We exploit this lexical overlap and handle the syntactic divergence by proposing two rule-based heuristics that generate pseudo-parallel gloss-text pairs from monolingual spoken-language text. By pre-training on this synthetic data, we improve translation from American Sign Language (ASL) to English and from German Sign Language (DGS) to German by up to 3.14 and 2.20 BLEU, respectively.
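
The abstract only states that the two rule-based heuristics turn monolingual spoken-language text into pseudo-parallel gloss-text pairs; the actual rules are defined in the paper body. As a rough illustration of the general idea (not the authors' heuristics), a minimal Python sketch could delete function words, strip simple inflection, and uppercase the remainder to mimic common gloss conventions; the FUNCTION_WORDS set, the suffix stripping, and the example sentence are all illustrative assumptions:

# A minimal sketch, assuming toy gloss conventions (function-word deletion,
# crude suffix stripping, uppercasing). These rules are illustrative
# assumptions, not the heuristics defined in the paper.

FUNCTION_WORDS = {
    "a", "an", "the", "is", "are", "am", "was", "were", "be", "been",
    "to", "of", "do", "does", "did", "will",
}

def to_pseudo_gloss(sentence: str) -> str:
    """Turn a spoken-language sentence into a pseudo-gloss string."""
    tokens = [t.strip(".,!?;:").lower() for t in sentence.split()]
    content = [t for t in tokens if t and t not in FUNCTION_WORDS]
    lemmas = []
    for token in content:
        # Very crude lemmatisation: strip a few common English suffixes.
        for suffix in ("ing", "ed", "s"):
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                token = token[: -len(suffix)]
                break
        lemmas.append(token)
    return " ".join(lemmas).upper()

if __name__ == "__main__":
    print(to_pseudo_gloss("The weather will be windy in the north tomorrow."))
    # -> WEATHER WINDY IN NORTH TOMORROW

Pairing each original sentence with its generated pseudo-gloss yields synthetic gloss-text data of the kind the paper uses for pre-training before fine-tuning on the real gloss-text pairs.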
Anthology ID:
2021.mtsummit-at4ssl.1
Volume:
Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL)
Month:
August
Year:
2021
Address:
Virtual
Editor:
Dimitar Shterionov
Venue:
MTSummit
Publisher:
Association for Machine Translation in the Americas
Pages:
1–11
URL:
https://aclanthology.org/2021.mtsummit-at4ssl.1
Cite (ACL):
Amit Moryossef, Kayo Yin, Graham Neubig, and Yoav Goldberg. 2021. Data Augmentation for Sign Language Gloss Translation. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 1–11, Virtual. Association for Machine Translation in the Americas.
Cite (Informal):
Data Augmentation for Sign Language Gloss Translation (Moryossef et al., MTSummit 2021)
PDF:
https://aclanthology.org/2021.mtsummit-at4ssl.1.pdf
Data
RWTH-PHOENIX-Weather 2014 T