Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources

Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, Julie A. Hochgesang, Jette Kristoffersen, Johanna Mesch, Marc Schulder (Editors)


Anthology ID: 2022.signlang-1
Month: June
Year: 2022
Address: Marseille, France
Venue: SignLang
Publisher: European Language Resources Association
URL: https://aclanthology.org/2022.signlang-1
PDF: https://aclanthology.org/2022.signlang-1.pdf

Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources
Eleni Efthimiou | Stavroula-Evita Fotinea | Thomas Hanke | Julie A. Hochgesang | Jette Kristoffersen | Johanna Mesch | Marc Schulder

PeruSIL: A Framework to Build a Continuous Peruvian Sign Language Interpretation Dataset
Gissella Bejarano | Joe Huamani-Malca | Francisco Cerna-Herrera | Fernando Alva-Manchego | Pablo Rivas

Video-based datasets for continuous sign language are scarce due to the challenging task of recording videos of native signers and the small number of people who can annotate sign language. COVID-19 has highlighted the key role of sign language interpreters in delivering nationwide health messages to deaf communities. In this paper, we present a framework for creating a multi-modal sign language interpretation dataset based on videos, and we use it to create the first dataset for Peruvian Sign Language (LSP) interpretation, annotated by hearing volunteers who have intermediate knowledge of LSP, guided by the video audio. We rely on hearing people to produce a first version of the annotations, which should be reviewed by native signers in the future. Our contributions are: i) we design a framework to annotate a sign language dataset; ii) we release the first annotated LSP multi-modal interpretation dataset (AEC); iii) we evaluate the annotations done by hearing people by training a sign language recognition model. Our model reaches up to 80.3% accuracy on a minimum of five classes (signs) in the AEC dataset, and 52.4% on a second dataset. Nevertheless, analyses by subject in the second dataset show variations worth discussing.

Introducing Sign Languages to a Multilingual Wordnet: Bootstrapping Corpora and Lexical Resources of Greek Sign Language and German Sign Language
Sam Bigeard | Marc Schulder | Maria Kopf | Thomas Hanke | Kyriaki Vasilaki | Anna Vacalopoulou | Theodore Goulas | Athanasia-Lida Dimou | Stavroula-Evita Fotinea | Eleni Efthimiou

Wordnets have been a popular lexical resource type for many years. Their sense-based representation of lexical items and numerous relation structures have been used for a variety of computational and linguistic applications. The inclusion of different wordnets into multilingual wordnet networks has further extended their use into the realm of cross-lingual research. Wordnets have been released for many spoken languages. Research has also been carried out into the creation of wordnets for several sign languages, but none have yet resulted in publicly available datasets. This article presents our own efforts towards the inclusion of sign languages in a multilingual wordnet, starting with Greek Sign Language (GSL) and German Sign Language (DGS). Based on differences in available language resources between GSL and DGS, we trial two workflows with different coverage priorities. We also explore how synergies between both workflows can be leveraged and how future work on additional sign languages could profit from building on existing sign language wordnet data. The results of our work are made publicly available.

Introducing the signglossR Package
Carl Börstell

The signglossR package is a library written in the programming language R, intended as an easy-to-use resource for those who work with signed language data and are familiar with R. The package contains a variety of functions designed specifically for signed language research, facilitating a single-pipeline workflow in R when accessing public language resources remotely (online) or a user’s own files and data. The package specifically targets the processing of image and video files, but also features some interaction with software commonly used by researchers working on signed language and gesture, such as ELAN and OpenPose. The signglossR package combines features and functionality from many other libraries and tools in order to simplify and collect existing resources in one place, as well as to add some new functionality and adapt everything to the needs of researchers working with visual language data. In this paper, the main features of this package are introduced.

Moving towards a Functional Approach in the Flemish Sign Language Dictionary Making Process
Caro Brosens | Margot Janssens | Sam Verstraete | Thijs Vandamme | Hannes De Durpel

This presentation will outline the dictionary making process of the new online Flemish Sign Language dictionary launched in 2019. First, some necessary background information is provided, consisting of a brief history of Flemish Sign Language (VGT) lexicography. Then three phases in the development of the renewed dictionary of VGT will be explored: (i) user research, (ii) data cleaning and modeling, and (iii) innovations. Rather than merely projecting a report of lexicographic research onto a website, the goal was to make the new dictionary a practical, user-friendly reference tool that meets the needs, expectations, and skills of the dictionary users. To gain a better understanding of who the users were, several sources were consulted: the user research by Joni Oyserman (2013), quantitative data from Google Analytics, and VGTC’s own user profiles. Since 2017, VGTC has been using Signbank, an electronic database specifically developed to compile and manage lexicographic data for sign languages. Bringing together all this raw data inadvertently led to inconsistencies and small mistakes; the data therefore had to be manually revised and complemented. The VGT dictionary was mainly modernized in form, but there are also several substantive differences with respect to the previous dictionary: for instance, search options were expanded, and semantic categories were added, as well as a new feedback feature. In addition, the new website is structurally different: it is now responsive to all screen sizes. Lastly, possible future innovations will briefly be discussed. VGTC aims to continuously improve both the user-facing interface and the content of the current dictionary. Future goals include, but are not limited to, adding definitions and sample sentences (preferably extracted from the corpus), as well as information on the etymology and common use of signs.

Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study
Anastasia Chizhikova | Vadim Kimmelman

We analyzed negative headshake found in the online corpus of Russian Sign Language. We found that negative headshake can co-occur with negative manual signs, although most of these signs are not accompanied by it. We applied OpenFace, a Computer Vision toolkit, to extract head rotation measurements from video recordings, and analyzed the headshake in terms of the number of peaks (turns), the amplitude of the turns, and their frequency. We find that such basic phonetic measurements of headshake can be extracted using a combination of manual annotation and Computer Vision, and can be further used in comparative research across constructions and sign languages.
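
As a rough illustration of the measurements described above, the sketch below derives the number of turns, their mean amplitude, and their frequency from the head pose columns that OpenFace writes to its CSV output (pose_Ry is head yaw, in radians, in OpenFace 2.x). The prominence threshold and frame rate are illustrative placeholders, not values from the study.

```python
# Illustrative sketch, not the paper's exact pipeline: basic phonetic measures
# of a headshake from an OpenFace output CSV.
import numpy as np
import pandas as pd
from scipy.signal import find_peaks

def headshake_measures(csv_path, fps=25):
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()      # OpenFace pads column names with spaces
    yaw = df["pose_Ry"].to_numpy()
    yaw = yaw - yaw.mean()                   # centre around the neutral head pose
    # A headshake alternates left and right turns, so count extrema in both directions.
    right, _ = find_peaks(yaw, prominence=0.02)
    left, _ = find_peaks(-yaw, prominence=0.02)
    turns = np.sort(np.concatenate([right, left]))
    amplitude = float(np.abs(yaw[turns]).mean()) if turns.size else 0.0
    frequency = turns.size / (len(yaw) / fps)  # turns per second
    return turns.size, amplitude, frequency
```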

Documenting the Use of Iranian Sign Language (ZEI) in Kermanshah
Yassaman Choubsaz | Onno Crasborn | Sara Siyavoshi | Farzaneh Soleimanbeigi

We describe a sign language documentation project funded by the Endangered Languages Documentation Programme (ELDP) in Kermanshah, a city in the west of Iran. The deposit in the ELDP archive (elararchive.org) includes recordings of 38 native signers of Zaban Eshareh Irani living in Kermanshah. The recordings start with an elicitation of the signs of the Farsi alphabet, along with fingerspelling of some words and vocabulary elicitation of some basic concepts. Subsequently, the participants are asked to watch short movies and then retell the stories. Finally, the participants have natural conversations in pairs, guided by a deaf moderator. Initial annotations of ID-glosses and translations into Persian and English were also archived. ID-glosses are stored as a dataset in Global Signbank, along with the citation form of signs and their phonological description. The resulting datasets and one hour of annotated conversation are available to other researchers in the ELDP archive.

Applying the Transcription System Typannot to Mouth Gestures
Claire Danet | Chloé Thomas | Adrien Contesse | Morgane Rébulard | Claudia S. Bianchini | Léa Chevrefils | Patrick Doan

Research on sign languages (SLs) requires dedicated, efficient and comprehensive transcription systems to analyze and compare sign parameters; at present, many transcription systems focus on the manual parameters, relegating the non-manual component to a lesser role. This article presents Typannot, a formal transcription system, and in particular its application to mouth gestures: 1) first, we expose its kinesiological approach, i.e. an intrinsic articulatory description anchored in the body; 2) then, we show how it was designed to integrate linguistic, graphic and technical aspects within a typeface; 3) finally, we present its application to a corpus of French Sign Language (LSF) recorded with motion capture.

Libras Portal: A Way of Documentation, a Way of Sharing
Ronice de Quadros | Renata Krusser | Daniela Saito

Libras Portal is an interface that makes available on a single site a series of elements and tools related to Brazilian Sign Language (Libras), comprising Libras documentation that may be employed for research and educational aims. Libras Portal was developed to bring together tools that support an education network and community of practice, making possible the sharing of knowledge, data, and interaction in Libras and Portuguese. Its design addresses web accessibility and usability, especially for videos in Libras, which are linked to accessible hyperlinks and tools for communication with the target community of practice. The layout also employs visual and textual resources for deaf users. The portal makes available resources for research and language teaching, namely Libras Grammar, Libras Corpus, Sign Bank, and the Literary Anthology of Libras. It is also a repository for sharing literary, academic, and didactic materials, courses, glossaries, anthologies, lesson models, and grammar analyses. Accordingly, tools were developed to ensure accessibility for deaf people: easy web browsing, indexed information, video upload, research, and the development of products for deaf communities. The current paper describes the development of this research and of the resources for accessibility.

Representation and Synthesis of Geometric Relocations
Michael Filhol | John McDonald

One of the key features of signed discourse is the geometric placement of gestural units in signing space. Signers use the geometry of signing space to describe the placements and forms of objects and also use it to contrast participants or locales in a story. Depending on the specific functions of the placement in the discourse, features such as geometric precision, gaze redirection and timing will all differ. A signing avatar must capture these differences to sign such discourse naturally. This paper builds on prior work that animated geometric depictions to enable a signing avatar to more naturally use signing space for opposing participants and concepts in discourse. Building from a structured linguistic description of a signed newscast, the system automatically synthesizes animation that correctly utilizes signing space to lay out the opposing locales in the report. The efficacy of the approach is demonstrated through comparisons of the avatar’s motion with the source signing.

Sign Language Phonetic Annotator-Analyzer: Open-Source Software for Form-Based Analysis of Sign Languages
Kathleen Currie Hall | Yurika Aonuki | Kaili Vesik | April Poy | Nico Tolmie

This paper provides an introduction to the Sign Language Phonetic Annotator-Analyzer (SLP-AA) software, a free and open-source tool currently under development, for facilitating detailed form-based transcription of signs. The software is designed to have a user-friendly interface that allows coders to transcribe a great deal of phonetic detail without being constrained to a particular phonetic annotation system or phonological framework. Here, we focus on the ‘annotator’ component of the software, outlining the functionality for transcribing movement, location, hand configuration, orientation, and contact, as well as the timing relations between them.

ASL-Homework-RGBD Dataset: An Annotated Dataset of 45 Fluent and Non-fluent Signers Performing American Sign Language Homeworks
Saad Hassan | Matthew Seita | Larwan Berke | Yingli Tian | Elaine Gale | Sooyeon Lee | Matt Huenerfauth

We are releasing a dataset containing videos of both fluent and non-fluent signers using American Sign Language (ASL), which were collected using a Kinect v2 sensor. This dataset was collected as a part of a project to develop and evaluate computer vision algorithms to support new technologies for automatic detection of ASL fluency attributes. A total of 45 fluent and non-fluent participants were asked to perform signing homework assignments that are similar to the assignments used in introductory or intermediate level ASL courses. The data is annotated to identify several aspects of signing including grammatical features and non-manual markers. Sign language recognition is currently very data-driven and this dataset can support the design of recognition technologies, especially technologies that can benefit ASL learners. This dataset might also be interesting to ASL education researchers who want to contrast fluent and non-fluent signing.

MY DGS – ANNIS: ANNIS and the Public DGS Corpus
Amy Isard | Reiner Konrad

In 2018 the DGS-Korpus project published the first full release of the Public DGS Corpus. The data have already been published in two different ways to fulfil the needs of different user groups, and we have now published a third portal, MY DGS – ANNIS, using the ANNIS browser-based corpus software. ANNIS is a corpus query tool for the visualization and querying of multi-layer corpus data. It has its own query language, AQL, and is accessed from a web browser without requiring a login. It allows more complex queries and visualizations than those provided by the existing research portal. We introduce ANNIS and its query language AQL, describe the structure of MY DGS – ANNIS, and give some example queries. Use cases with queries over multiple annotation tiers and metadata illustrate the research potential of this powerful tool and show how students and researchers can explore the Public DGS Corpus.
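
To give a flavour of AQL, the sketch below collects a few queries of the general shape such multi-tier corpora support, wrapped in Python for convenience. The annotation layer names (gloss, mouthing) and gloss values are hypothetical placeholders, not the actual tier names of the Public DGS Corpus.

```python
# Illustrative AQL queries; layer names and values are hypothetical.
queries = {
    # every token carrying a particular value on a gloss layer
    "single_gloss": 'gloss="HOUSE1"',
    # two glosses in direct succession: "." is AQL's precedence operator
    "gloss_sequence": 'gloss="INDEX1" & gloss="HOUSE1" & #1 . #2',
    # a gloss overlapping any mouthing annotation: "_o_" tests overlap
    "gloss_with_mouthing": 'gloss="HOUSE1" & mouthing=/.*/ & #1 _o_ #2',
}
for name, aql in queries.items():
    print(f"{name:>20}: {aql}")
```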

Outreach and Science Communication in the DGS-Korpus Project: Accessibility of Data and the Benefit of Interactive Exchange between Communities
Elena Jahn | Calvin Khan | Annika Herrmann

In this paper, we tackle the issues of science communication and dissemination within a sign language corpus project, with a focus on spreading accessible information and involving the D/deaf community on various levels. We discuss successful examples, challenges, and limitations of public relations in such a project, and elaborate on particular use cases. The focus group is presented as a best-practice example of what we think is a necessary perspective: taking external knowledge seriously and letting community experts interact with and provide feedback on a par with academic personnel. Covering both social media and on-site events, we present some exemplary approaches from our team involved in public relations.
Keywords: public relations, science communication, sign language community, DGS-Korpus project

MC-TRISLAN: A Large 3D Motion Capture Sign Language Data-set
Pavel Jedlička | Zdeněk Krňoul | Milos Zelezny | Ludek Muller

The new 3D motion capture data corpus expands the portfolio of existing language resources with a corpus of 18 hours of Czech Sign Language. This helps to alleviate the current critical lack of the high-quality data necessary for research and the subsequent deployment of machine learning techniques in this area. We currently provide the largest collection of annotated sign language recordings acquired by state-of-the-art 3D human body recording technology, intended for successful future deployment in communication technologies, especially machine translation and sign language synthesis.

A Machine Learning-based Segmentation Approach for Measuring Similarity between Sign Languages
Tonni Das Jui | Gissella Bejarano | Pablo Rivas

Due to the lack of varied, native and continuous datasets, sign languages are low-resource languages that can benefit from multilingualism in machine translation. In order to analyze the benefits of approaches like multilingualism, measuring the similarity between sign languages can guide better matches and contributions between languages. However, calculating the similarity between sign languages implies laborious work to measure how close or distant signs are in their respective contexts. For that reason, we propose to support similarity measurement between sign languages through a video-segmentation-based machine learning model that quantifies this match among signs of different countries’ sign languages. Using a machine learning approach, the similarity measurement process can run more smoothly than with a more manual approach. We use a pre-trained temporal segmentation model for British Sign Language (BSL). We test it on three datasets: an American Sign Language (ASL) dataset, an Indian Sign Language (ISL) dataset, and an Australian Sign Language (Auslan) dataset. We hypothesize that the percentage of signs segmented and recognized by this machine learning model can represent the percentage of overlap or similarity between British and the other three sign languages. In our ongoing work, we evaluate three metrics considering Swadesh’s and Woodward’s lists and their synonyms. We find that our intermediate-strict metric coincides with a more classical analysis of the similarity between British and American Sign Language, as well as with the classically low measurement between Indian and British sign languages. On the other hand, our similarity measurement between British and Australian Sign Language only holds for part of the Auslan data sample.
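
A minimal sketch of the overlap measure implied above: run the BSL-trained model over videos of another sign language's basic vocabulary and report the fraction of list items whose sign is recognized. The function recognise_signs and the synonym handling are assumptions for illustration; the paper's actual metrics (including the intermediate-strict variant) are not specified in the abstract.

```python
# Hypothetical sketch of a vocabulary-overlap similarity score.
# `recognise_signs` stands in for the pre-trained segmentation/recognition model.

def similarity_score(items, recognise_signs, synonyms=None):
    """items: list of (concept, video) pairs, one per vocabulary entry.
    recognise_signs(video) -> set of gloss labels predicted for the video."""
    synonyms = synonyms or {}
    matched = sum(
        1
        for concept, video in items
        if recognise_signs(video) & ({concept} | set(synonyms.get(concept, ())))
    )
    return matched / len(items)  # fraction of the vocabulary recognised
```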

The Sign Language Dataset Compendium: Creating an Overview of Digital Linguistic Resources
Maria Kopf | Marc Schulder | Thomas Hanke

One of the challenges that sign language researchers face is the identification of suitable language datasets, particularly for cross-lingual studies. There is no single source of information on what sign language corpora and lexical resources exist or how they compare. Instead, they have to be found through extensive literature review or word-of-mouth. The amount of information available on individual datasets can also vary widely and may be distributed across different publications, data repositories and (potentially defunct) project websites. This article introduces the Sign Language Dataset Compendium, an extensive overview of linguistic resources for sign languages. It covers existing corpora and lexical resources, as well as commonly used data collection tasks. Special attention is paid to covering resources for many different languages from around the globe. All information is provided in a standardised format to make entries comparable, but kept flexible enough to allow for differences in content. The compendium is intended as a growing resource that will be updated regularly.

Making Sign Language Corpora Comparable: A Study of Palm-Up and Throw-Away in Polish Sign Language, German Sign Language, and Russian Sign Language
Anna Kuder

This paper is primarily devoted to describing the preparation phase of a large-scale comparative study based on naturalistic linguistic data drawn from multiple sign language corpora. To provide an example, I am using my current project on manual gestural elements in Polish Sign Language, German Sign Language, and Russian Sign Language. The paper starts with a description of the reasons behind undertaking this project. Then, I describe the scope of my study, which is focused on two manual elements present in all three mentioned sign languages: palm-up and throw-away; and the three corpora which are my data sources. This is followed by a presentation of the steps taken in the initial stages of the project in order to make the data comparable. Those steps are: choosing the adequate data samples from all three corpora, gathering all data within the chosen software, and creating an annotation schema that builds on the annotations already present in all three corpora. Even though the project is still underway, and the annotation process is ongoing, preliminary discussions about the nature of the analysed manual activities are presented based on the initial annotations for the sake of evaluating the created annotation schema. I conclude the paper with some remarks about the performance of the employed methodology.

Open Repository of the Polish Sign Language Corpus: Publication Project of the Polish Sign Language Corpus
Anna Kuder | Joanna Wójcicka | Piotr Mostowski | Paweł Rutkowski

Between 2010 and 2020, the research team of the Section for Sign Linguistics collected, annotated, and translated a large corpus of Polish Sign Language (polski język migowy, PJM). After this task was finished, a substantial part of the gathered materials was published online as the Open Repository of the Polish Sign Language Corpus. The current paper gives an overview of the process of converting the material from the Corpus into the Repository. It presents and explains the decisions made along the way and describes the process of data preparation and publication. There are two levels of access to the Repository, which are meant to fulfil the needs of a wide range of public users, from members of the Deaf community, through hearing students of PJM, sign language teachers and interpreters, to users with an academic background. We describe how corpus material available in open access was prepared to be searchable by text type and elicitation task, by sociolinguistic metadata, and by translation into written Polish. We go on to explain how access for research purposes differs from open access. We present possible ways in which data gathered in the Repository may be used by members of the signing community in Poland and abroad.

Functional Data Analysis of Non-manual Marking of Questions in Kazakh-Russian Sign Language
Anna Kuznetsova | Alfarabi Imashev | Medet Mukushev | Anara Sandygulova | Vadim Kimmelman

This paper is a continuation of Kuznetsova et al. (2021), which described non-manual markers of polar and wh-questions in comparison with statements in an NLP dataset of Kazakh-Russian Sign Language (KRSL) using Computer Vision. One of the limitations of the previous work was the distortion of the 3D face landmarks when the head was rotated. The proposed solution was to train a simple linear regression model to predict the distortion and then subtract it from the original output. We improve this technique with a multilayer perceptron. Another limitation that we intend to address in this paper is the discrete analysis of the continuous movement of non-manuals. In Kuznetsova et al. (2021) we averaged the value of the non-manual over its scope for statistical analysis. To preserve information on the shape of the movement, in this study we use a statistical tool that is often used in speech research, Functional Data Analysis, specifically Functional PCA.
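
The sketch below illustrates the distortion-correction idea under stated assumptions: fit a multilayer perceptron that predicts a landmark-derived measure's distortion from head rotation angles, then subtract the prediction. The scikit-learn model, feature layout, and synthetic training data are illustrative; the authors' exact architecture and features are not given in the abstract.

```python
# Illustrative sketch (not the authors' exact setup) of rotation-distortion
# correction with an MLP instead of linear regression.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(1000, 3))    # head rotation: yaw, pitch, roll
y = 0.8 * X[:, 0] ** 2 - 0.3 * X[:, 1]        # stand-in nonlinear distortion

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

def corrected(raw_signal, rotations):
    """Remove the predicted rotation-induced distortion, frame by frame."""
    return raw_signal - model.predict(rotations)
```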

Two New AZee Production Rules Refining Multiplicity in French Sign Language
Emmanuella Martinod | Claire Danet | Michael Filhol

This paper is a contribution to sign language (SL) modeling. We focus on the hitherto imprecise notion of “Multiplicity”, assumed to express plurality in French Sign Language (LSF), using the AZee approach. AZee is a linguistic and formal approach to modeling LSF. It takes into account the linguistic properties and specificities of LSF while respecting constraints linked to a modeling process. We present the methodology used to extract AZee production rules. Based on the analysis of strong form-meaning associations in SL data (elicited image descriptions and short news reports), we identified two production rules structuring the expression of multiplicity in LSF. We explain how these newly extracted production rules differ from existing ones. Our goal is to refine the AZee approach to allow the coverage of a growing part of LSF. This work could lead to improvements in SL synthesis and SL automatic translation.

Language Planning in Action: Depiction as a Driver of New Terminology in Irish Sign Language
Rachel Moiselle | Lorraine Leeson

In this paper, we examine the linguistic phenomenon known as ‘depiction’, which relates to the ability to visually represent semantic components (Dudis, 2004). While some elements of this have been described for Irish Sign Language, with particular attention to the ‘productive lexicon’ (Leeson & Grehan, 2004; Leeson & Saeed, 2012; Matthews, 1996; O’Baoill & Matthews, 2000), here we take the analysis further, drawing on what we have learned from cognitive linguistics over the past decade. Building on several recently developed domain-specific glossaries (e.g., STEM, Covid-19, the political domain, and Sexual, Domestic and Gender-Based Violence (SDGBV)-related vocabulary), we present ongoing analysis indicating that a deliberate focus on iconicity, in particular on elements of depiction, appears to be a primary driver of new terminology. We also consider the potential implications of the insights we intend to gain from Deaf-led glossary development work in the context of Machine Translation goals, for example, for work in progress on the Horizon 2020-funded SignON project.

Facilitating the Spread of New Sign Language Technologies across Europe
Hope Morgan | Onno Crasborn | Maria Kopf | Marc Schulder | Thomas Hanke

For developing sign language technologies like automatic translation, huge amounts of training data are required. Even the larger corpora available for some sign languages are tiny compared to the amounts of data used for corresponding spoken language technologies. The overarching goal of the European project EASIER is to develop a framework for bidirectional automatic translation between sign and spoken languages and between sign languages. One part of this multi-dimensional project is to pool available language resources from European sign languages into a larger dataset to address the data scarcity problem. This approach promises to open the floor for lower-resourced sign languages in Europe. This article focusses on efforts in the EASIER project to allow new languages to make use of such technologies in the future. What are the characteristics of sign language resources needed to train recognition, translation, and synthesis algorithms, and how can other countries, including those without any sign language resources, follow along with these developments? The efforts undertaken in EASIER include creating workflow documents and organizing training sessions in online workshops. They reflect the current state of the art and will likely need to be updated in the coming decade.

ISL-LEX v.1: An Online Lexical Resource of Israeli Sign Language
Hope Morgan | Wendy Sandler | Rose Stamp | Rama Novogrodsky

This paper describes a new online lexical resource and interactive tool for Israeli Sign Language, ISL-LEX v.1. The dataset contains 961 non-compound ISL signs with the following information: subjective frequency ratings from native signers, iconicity ratings from native and non-native signers (presented separately), and phonological properties in six domains. The selection of signs was also designed to reflect a broad distinction between those signs acquired early in childhood and those acquired later. ISL-LEX is an online interface built using the SIGN-LEX visualization (Caselli et al. 2022), and is intended for use by researchers, educators, and students. It is therefore offered in two text-based versions, English and Hebrew, with video instructions in ISL.

Towards Large Vocabulary Kazakh-Russian Sign Language Dataset: KRSL-OnlineSchool
Medet Mukushev | Aigerim Kydyrbekova | Vadim Kimmelman | Anara Sandygulova

This paper presents a new dataset for Kazakh-Russian Sign Language (KRSL) created for the purposes of Sign Language Processing. In 2020, Kazakhstan’s schools quickly switched to online mode due to the COVID-19 pandemic. Every working day, the El-arna TV channel broadcast video lessons for grades 1 to 11 with sign language translation. This gave us the opportunity to record a corpus with a large vocabulary and spontaneous SL interpretation. The corpus contains video recordings of Kazakhstan’s online school lessons translated into Kazakh-Russian Sign Language by 7 interpreters. So far we have collected and cleaned 890 hours of video material. A custom annotation tool was created to make the process of data annotation simple and easy for the Deaf community to use. To date, around 325 hours of video have been annotated with glosses, and 4,009 lessons out of 4,547 have been transcribed with automatic speech-to-text software. The KRSL-OnlineSchool dataset will be made publicly available at https://krslproject.github.io/online-school/

Towards Semi-automatic Sign Language Annotation Tool: SLAN-tool
Medet Mukushev | Arman Sabyrov | Madina Sultanova | Vadim Kimmelman | Anara Sandygulova

This paper presents a semi-automatic annotation tool for sign languages, SLAN-tool. The SLAN-tool provides a web-based service for the annotation of sign language videos. Researchers can use the SLAN-tool web service to annotate new and existing sign language datasets with different types of annotations, such as glosses, handshape configurations, and signing regions, via its custom tier-adding functionality. A unique feature of the tool is its automatic annotation functionality, which uses several neural network models to recognize signing segments in videos and classify handshapes according to the HamNoSys handshape inventory. Furthermore, SLAN-tool users can export annotations and import them into ELAN. The SLAN-tool is publicly available at https://slan-tool.com.

Resources for Computer-Based Sign Recognition from Video, and the Criticality of Consistency of Gloss Labeling across Multiple Large ASL Video Corpora
Carol Neidle | Augustine Opoku | Carey Ballard | Konstantinos M. Dafnis | Evgenia Chroni | Dimitri Metaxas

The WLASL purports to be “the largest video dataset for Word-Level American Sign Language (ASL) recognition.” It brings together various publicly shared video collections that could be quite valuable for sign recognition research, and it has been used extensively for such research. However, a critical problem with the accompanying annotations has heretofore not been recognized by the authors, nor by those who have exploited these data: there is no 1-to-1 correspondence between sign productions and gloss labels. Here we describe a large (and recently expanded and enhanced), linguistically annotated, downloadable video corpus of citation-form ASL signs shared by the American Sign Language Linguistic Research Project (ASLLRP)—with 23,452 sign tokens and an online Sign Bank—in which such correspondences are enforced. We furthermore provide annotations for 19,672 of the WLASL video examples consistent with ASLLRP glossing conventions. For those wishing to use WLASL videos, this provides a set of annotations that makes it possible: (1) to use those data reliably for computational research; and/or (2) to combine the WLASL and ASLLRP datasets, creating a combined resource that is larger and richer than either dataset individually, with consistent gloss labeling for all signs. We also offer a summary of our own sign recognition research to date that exploits these data resources.

Signed Language Transcription and the Creation of a Cross-linguistic Comparative Database
Justin Power | David Quinto-Pozos | Danny Law

As the availability of signed language data has rapidly increased, sign scholars have been confronted with the challenge of creating a common framework for the cross-linguistic comparison of the phonological forms of signs. While transcription techniques have played a fundamental role in the creation of cross-linguistic comparative databases for spoken languages, transcription has featured much less prominently in sign research and lexicography. Here we report the experiences of the Sign Change project in using the signed language transcription system HamNoSys to create a comparative database of basic vocabulary for thirteen signed languages. We report the results of a small-scale study, in which we measured (i) the average time required for two trained transcribers to complete a transcription and (ii) the similarity of their independently produced transcriptions. We find that, across the two transcribers, the transcription of one sign required, on average, one minute and a half. We also find that the similarity of transcriptions differed across phonological parameters. We consider the implications of our findings about transcription time and transcription similarity for other projects that plan to incorporate transcription techniques.
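
One simple way to quantify the similarity of two independently produced transcriptions, sketched below, is a normalised edit distance over HamNoSys symbol sequences. Whole-transcription comparison is an assumption here, since the abstract reports similarity per phonological parameter without specifying the metric used.

```python
# Hypothetical sketch: normalised Levenshtein similarity between two
# transcriptions of the same sign, treated as symbol sequences.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def transcription_similarity(t1: str, t2: str) -> float:
    """1.0 = identical transcriptions, 0.0 = maximally different."""
    if not t1 and not t2:
        return 1.0
    return 1.0 - levenshtein(t1, t2) / max(len(t1), len(t2))
```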

Integrating Auslan Resources into the Language Data Commons of Australia
River Tae Smith | Louisa Willoughby | Trevor Johnston

This paper describes a project to secure Auslan (Australian Sign Language) resources within a national language data network called the Language Data Commons of Australia (LDaCA). The resources are Auslan Signbank, a web-based multi-media dictionary, and the Auslan Corpus, a collection of video recordings of the language being used in various contexts with time-aligned ELAN annotation files. We aim to make these resources accessible to the language community, encourage community participation in the curation of the data, and facilitate and extend their uses in language teaching and linguistic research. The software platforms of both resources will be made compatible with other LDaCA resources; and the two will also be aggregated and linked so that (i) users of the dictionary can view attested corpus examples for an entry; and (ii) users of the corpus can instantly view the dictionary entry for an already glossed sign to check phonological, lexical and grammatical information about it, and/or to ensure that the correct annotation gloss (aka ‘ID-gloss’) for a sign token has been chosen. This will enhance additions to annotations in the Auslan Corpus, entries in Auslan Signbank and the integrity of research based on both.

Capturing Distalization
Rose Stamp | Lilyana Khatib | Hagit Hel-Or

Coding and analyzing large amounts of video data is a challenge for sign language researchers, who traditionally code 2D video data manually. In recent years, the implementation of 3D motion capture technology as a means of automatically tracking movement in sign language data has been an important step forward. Several studies show that motion capture technologies can measure sign language movement parameters – such as volume, speed, and variance – with high accuracy and objectivity. In this paper, using motion capture technology and machine learning, we attempt to automatically measure a more complex feature of sign language known as distalization. In general, distalized signs use the joints further from the torso (such as the wrist); however, the measure is relative, and distalization is therefore not straightforward to measure. The development of a reliable and automatic measure of distalization using motion tracking technology is of special interest in many fields of sign language research.
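
As a hedged illustration of why distalization is hard to pin down, the sketch below computes one possible proxy: the share of total movement articulated at the distal joints (elbow and wrist) relative to the proximal shoulder joint. This is not the measure developed in the paper, merely a simple baseline one might start from.

```python
# Hypothetical distalization proxy from 3D motion-capture trajectories,
# each of shape (frames, 3).
import numpy as np

def path_length(traj):
    # total distance travelled by a (relative) trajectory
    return float(np.linalg.norm(np.diff(traj, axis=0), axis=1).sum())

def distalization_index(shoulder, elbow, wrist, hand):
    distal = path_length(hand - wrist) + path_length(wrist - elbow)
    proximal = path_length(elbow - shoulder)
    # values above 0.5 suggest mostly distal articulation
    return distal / (distal + proximal)
```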

The Corpus of Israeli Sign Language
Rose Stamp | Ora Ohanin | Sara Lanesman

The Corpus of Israeli Sign Language is a four-year project (2020-2024) which aims to create a digital open-access corpus of spontaneous and elicited data from a representative sample of the Israeli deaf community. In this paper, the methodology for building the Corpus of Israeli Sign Language is described. Israeli Sign Language (ISL) is the main sign language used across Israel by around 10,000 people. As part of the corpus, data will be collected from 120 deaf ISL signers across four sites in Israel: Tel Aviv and the Centre, Haifa and the North, Be’er Sheva and the South and Jerusalem and the surrounding area. Participants will engage in a variety of tasks, eliciting a range of signing styles from free conversation to lexical elicitation. The dataset will consist of recordings of over 360 hours of video data which will be used to conduct sociolinguistic investigations of language contact, variation, and change in the near term, and other linguistic analyses in the future.

Segmentation of Signs for Research Purposes: Comparing Humans and Machines
Bencie Woll | Neil Fox | Kearsy Cormier

Sign languages such as British Sign Language (BSL) are visual languages which lack standard writing systems. Annotation of sign language data, especially for the purposes of machine readability, is therefore extremely slow. Tools to help automate and thus speed up the annotation process are much needed. Here we test one such tool under development (VIA-SLA), which uses temporal convolutional networks (Renz et al., 2021a, b) to segment continuous signing in any sign language, and is designed to integrate smoothly with ELAN, the annotation software widely used for the analysis of sign language videos. We compare automatic segmentation by machine with segmentation done by a human, both in terms of the time needed and the accuracy of segmentation, using samples taken from the BSL Corpus (Schembri et al., 2014). A small sample of four short video files is tested (mean duration 25 seconds). We find that mean accuracy in terms of the number and location of segmentations is relatively high, at around 78%. This preliminary test suggests that VIA-SLA promises to be very useful for sign linguists.
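
A sketch of one way such a human-machine comparison could be scored is shown below: a machine boundary counts as a hit when it falls within a small temporal tolerance of an as-yet-unmatched human boundary. The tolerance window is an assumption, not the criterion used in the paper.

```python
# Hypothetical scoring of machine segmentation against a human gold standard.

def boundary_accuracy(machine, human, tolerance=0.1):
    """machine, human: sorted lists of boundary times in seconds."""
    unmatched = list(human)
    hits = 0
    for b in machine:
        match = next((h for h in unmatched if abs(h - b) <= tolerance), None)
        if match is not None:
            unmatched.remove(match)   # each human boundary matches at most once
            hits += 1
    precision = hits / len(machine) if machine else 0.0
    recall = hits / len(human) if human else 0.0
    return precision, recall
```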

Sign Language Video Anonymization
Zhaoyang Xia | Yuxiao Chen | Qilong Zhangli | Matt Huenerfauth | Carol Neidle | Dimitri Metaxas

Deaf signers who wish to communicate in their native language frequently share videos on the Web. However, videos cannot preserve privacy—as is often desirable for discussion of sensitive topics—since both hands and face convey critical linguistic information and therefore cannot be obscured without degrading communication. Deaf signers have expressed interest in video anonymization that would preserve linguistic content. However, attempts to develop such technology have thus far shown limited success. We are developing a new method for such anonymization, with input from ASL signers. We modify a motion-based image animation model to generate high-resolution videos with the signer identity changed, but with the preservation of linguistically significant motions and facial expressions. An asymmetric encoder-decoder structured image generator is used to generate the high-resolution target frame from the low-resolution source frame based on the optical flow and confidence map. We explicitly guide the model to attain a clear generation of hands and faces by using bounding boxes to improve the loss computation. FID and KID scores are used for the evaluation of the realism of the generated frames. This technology shows great potential for practical applications to benefit deaf signers.