Vadim Kimmelman


2024

Headshakes in NGT: Relation between Phonetic Properties & Linguistic Functions
Vadim Kimmelman | Marloes Oomen | Roland Pfau
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources

Nonmanual Marking of Questions in Balinese Homesign Interactions: a Computer-Vision Assisted Analysis
Vadim Kimmelman | Ari Price | Josefina Safar | Connie de Vos | Jan Bulla
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources

Eye Blink Detection in Sign Language Data Using CNNs and Rule-Based Methods
Margaux Susman | Vadim Kimmelman
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources

2022

Crowdsourcing Kazakh-Russian Sign Language: FluentSigners-50
Medet Mukushev | Aigerim Kydyrbekova | Alfarabi Imashev | Vadim Kimmelman | Anara Sandygulova
Proceedings of the Thirteenth Language Resources and Evaluation Conference

This paper presents the methodology we used to crowdsource FluentSigners-50, a new large-scale signer-independent dataset for Kazakh-Russian Sign Language (KRSL) created for Sign Language Processing. Involving the Deaf community throughout the research process, we first designed a research protocol and then ran an efficient crowdsourcing campaign that resulted in the FluentSigners-50 dataset. The dataset consists of 173 sentences performed by 50 KRSL signers, for a total of 43,250 video samples. Contributors recorded the videos in real-life settings against various backgrounds using various devices, such as smartphones and web cameras; the contributions therefore vary in distance to the camera, camera angle, aspect ratio, video quality, and frame rate. In addition, the dataset contains a high degree of linguistic and inter-signer variability and is thus a better training set for recognizing real-life signing. FluentSigners-50 is publicly available at https://krslproject.github.io/fluentsigners-50/

Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study
Anastasia Chizhikova | Vadim Kimmelman
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources

We analyzed negative headshake in the online corpus of Russian Sign Language and found that it can co-occur with negative manual signs, although most such signs are not accompanied by it. We applied OpenFace, a Computer Vision toolkit, to extract head rotation measurements from the video recordings and analyzed the headshake in terms of the number of peaks (turns), the amplitude of the turns, and their frequency. We show that such basic phonetic measurements of headshake can be extracted by combining manual annotation with Computer Vision, and can be further used in comparative research across constructions and sign languages.
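
As an informal illustration of the kind of measurement described in this abstract, the sketch below counts turns and computes their amplitude and frequency from OpenFace head-pose output. The file name, the 30 fps frame rate, and the prominence threshold are assumptions, not values from the paper; OpenFace stores head yaw in a "pose_Ry" column (in radians), and some OpenFace versions pad the header names with spaces.

```python
# Minimal sketch: basic headshake phonetics from OpenFace head-pose output.
import numpy as np
import pandas as pd
from scipy.signal import find_peaks

FPS = 30  # assumed frame rate of the source video

df = pd.read_csv("headshake_clip.csv")  # hypothetical OpenFace output file
df.columns = df.columns.str.strip()     # OpenFace headers may contain spaces
yaw = df["pose_Ry"].to_numpy()          # rotation around the vertical axis
yaw = yaw - yaw.mean()                  # centre on the rest position

# A "turn" is an extremum of the yaw trajectory: peaks are turns to one
# side, troughs to the other.
peaks, _ = find_peaks(yaw, prominence=0.02)
troughs, _ = find_peaks(-yaw, prominence=0.02)
n_turns = len(peaks) + len(troughs)

duration_s = len(yaw) / FPS
amplitude = (yaw[peaks].mean() - yaw[troughs].mean()
             if len(peaks) and len(troughs) else 0.0)
frequency = n_turns / duration_s if duration_s else 0.0
print(f"turns={n_turns}, amplitude={amplitude:.3f} rad, "
      f"frequency={frequency:.2f} turns/s")
```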

Functional Data Analysis of Non-manual Marking of Questions in Kazakh-Russian Sign Language
Anna Kuznetsova | Alfarabi Imashev | Medet Mukushev | Anara Sandygulova | Vadim Kimmelman
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources

This paper is a continuation of Kuznetsova et al. (2021), which described non-manual markers of polar and wh-questions in comparison with statements in an NLP dataset of Kazakh-Russian Sign Language (KRSL) using Computer Vision. One limitation of the previous work was the distortion of the 3D face landmarks when the head was rotated. The solution proposed there was to train a simple linear regression model to predict the distortion and then subtract it from the original output; we improve on this technique with a multilayer perceptron. Another limitation we address here is the discrete analysis of the continuous movement of non-manuals: in Kuznetsova et al. (2021) we averaged the value of the non-manual over its scope for statistical analysis. To preserve information on the shape of the movement, in this study we use a statistical tool often used in speech research, Functional Data Analysis, specifically Functional PCA.
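
The following is a minimal sketch of Functional PCA in its common discretised form: variable-length trajectories of a non-manual (e.g. eyebrow height over the scope of a question) are resampled onto a shared grid, and PCA over the resulting matrix yields a mean curve, principal component curves, and per-curve scores for statistics. The grid size and toy data are assumptions, and the paper's actual FDA pipeline (and its MLP-based distortion correction, which could be approximated with, e.g., sklearn's MLPRegressor) may differ in detail.

```python
import numpy as np

def resample(curve, n_points=50):
    """Linearly resample a 1-D trajectory onto a fixed-length grid."""
    old = np.linspace(0.0, 1.0, len(curve))
    new = np.linspace(0.0, 1.0, n_points)
    return np.interp(new, old, curve)

def fpca(curves, n_components=3):
    """Return mean curve, component curves, and scores for a list of curves."""
    X = np.vstack([resample(c) for c in curves])   # (n_curves, n_points)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]                 # dominant shapes of variation
    scores = (X - mean) @ components.T             # loadings used for statistics
    return mean, components, scores

# Toy usage: 20 random-walk trajectories of varying length.
rng = np.random.default_rng(0)
curves = [np.cumsum(rng.normal(size=rng.integers(30, 80))) for _ in range(20)]
mean, comps, scores = fpca(curves)
print(comps.shape, scores.shape)  # (3, 50) (20, 3)
```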

Towards Large Vocabulary Kazakh-Russian Sign Language Dataset: KRSL-OnlineSchool
Medet Mukushev | Aigerim Kydyrbekova | Vadim Kimmelman | Anara Sandygulova
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources

This paper presents a new dataset for Kazakh-Russian Sign Language (KRSL) created for the purposes of Sign Language Processing. In 2020, Kazakhstan's schools switched rapidly to online instruction due to the COVID-19 pandemic, and every working day the El-arna TV channel broadcast video lessons for grades 1 through 11 with sign language translation. This gave us the opportunity to record a corpus with a large vocabulary and spontaneous SL interpretation: the corpus contains video recordings of Kazakhstan's online school lessons translated into Kazakh-Russian Sign Language by 7 interpreters. So far we have collected and cleaned 890 hours of video material. A custom annotation tool was created to make data annotation simple and easy to use for the Deaf community. To date, around 325 hours of video have been annotated with glosses, and 4,009 of the 4,547 lessons have been transcribed with automatic speech-to-text software. The KRSL-OnlineSchool dataset will be made publicly available at https://krslproject.github.io/online-school/

Towards Semi-automatic Sign Language Annotation Tool: SLAN-tool
Medet Mukushev | Arman Sabyrov | Madina Sultanova | Vadim Kimmelman | Anara Sandygulova
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources

This paper presents SLAN-tool, a semi-automatic annotation tool for sign languages. SLAN-tool provides a web-based service for the annotation of sign language videos. Researchers can use it to annotate new and existing sign language datasets with different types of annotations, such as glosses, handshape configurations, and signing regions, enabled by functionality for adding custom tiers. A unique feature of the tool is its automatic annotation functionality, which uses several neural network models to recognize signing segments in videos and to classify handshapes according to the HamNoSys handshape inventory. Furthermore, users can export annotations and import them into ELAN. SLAN-tool is publicly available at https://slan-tool.com.
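
As an illustration of the ELAN interoperability mentioned in this abstract, the sketch below writes automatically detected signing segments to an ELAN-readable .eaf file using the third-party pympi-ling package. SLAN-tool's actual export format and internals are not described in the abstract, so the segment data and tier name here are invented.

```python
import pympi

# (start_ms, end_ms, label) triples, e.g. from a signing-segment detector.
segments = [(0, 850, "SIGN"), (900, 1400, "SIGN"), (2000, 2600, "SIGN")]

eaf = pympi.Elan.Eaf()
eaf.add_tier("signing-segments")          # a custom tier for the detector output
for start, end, label in segments:
    eaf.add_annotation("signing-segments", start, end, value=label)
eaf.to_file("exported_annotations.eaf")   # openable directly in ELAN
```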

2021

Using Computer Vision to Analyze Non-manual Marking of Questions in KRSL
Anna Kuznetsova | Alfarabi Imashev | Medet Mukushev | Anara Sandygulova | Vadim Kimmelman
Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL)

This paper presents a study that compares non-manual markers of polar and wh-questions with those of statements in Kazakh-Russian Sign Language (KRSL), in a dataset collected for NLP tasks. The primary focus of the study is to demonstrate the utility of computer vision solutions for the linguistic analysis of non-manuals in sign languages, although additional corrections are required to account for biases in their output. To this end, we analyzed recordings of 10 triplets of sentences produced by 9 native signers, using both manual annotation and computer vision solutions (such as OpenFace). We utilize and improve the computer vision solution, and briefly describe the results of the linguistic analysis.
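
A minimal sketch of the kind of rotation-bias correction referred to above: a linear model learns how much of a landmark-derived measurement (here a synthetic "eyebrow height") is predictable from head rotation alone, and that component is subtracted. The synthetic data and the neutral-frame fitting assumption are illustrative, not the paper's exact recipe.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
rotations = rng.normal(scale=0.2, size=(1000, 3))  # pitch, yaw, roll per frame
# Synthetic measurement whose variation is partly driven by head pitch.
eyebrow = 1.0 + 0.5 * rotations[:, 0] + rng.normal(scale=0.05, size=1000)

# Fit on frames assumed to have a neutral eyebrow position, so systematic
# variation there is attributable to head rotation (the distortion).
model = LinearRegression().fit(rotations, eyebrow)

# Remove the rotation-predicted component, keeping the baseline level.
corrected = eyebrow - model.predict(rotations) + model.intercept_
print(corrected.std(), "<", eyebrow.std())  # distortion variance removed
```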

2020

A Dataset for Linguistic Understanding, Visual Evaluation, and Recognition of Sign Languages: The K-RSL
Alfarabi Imashev | Medet Mukushev | Vadim Kimmelman | Anara Sandygulova
Proceedings of the 24th Conference on Computational Natural Language Learning

The paper presents the first dataset that aims to serve the interdisciplinary purposes of the computer vision community and sign language linguistics. To date, the majority of Sign Language Recognition (SLR) approaches treat sign language recognition as a manual gesture recognition problem. However, signers also use other articulators, such as facial expressions and head and body position and movement, to convey linguistic information. Given the important role of non-manual markers, this paper proposes a dataset and presents a use case to stress the importance of including non-manual features for improving sign recognition accuracy. To the best of our knowledge, no prior publicly available dataset explicitly focuses on the non-manual components responsible for the grammar of sign languages. The proposed dataset contains 28,250 high-resolution, high-quality videos of signs, with annotation of manual and non-manual components. We conducted a series of evaluations to investigate whether non-manual components improve sign recognition accuracy. We release the dataset to encourage SLR researchers and to help advance progress in this area toward real-time sign language interpretation. The dataset will be made publicly available at https://krslproject.github.io/krsl-corpus

Evaluation of Manual and Non-manual Components for Sign Language Recognition
Medet Mukushev | Arman Sabyrov | Alfarabi Imashev | Kenessary Koishybay | Vadim Kimmelman | Anara Sandygulova
Proceedings of the Twelfth Language Resources and Evaluation Conference

The motivation behind this work lies in the need to differentiate between similar signs that differ only in the non-manual components present in any sign. To this end, we recorded full sentences signed by five native signers and extracted 5,200 isolated sign samples of twenty frequently used signs in Kazakh-Russian Sign Language (K-RSL) which have similar manual components but differ in non-manual components (i.e., facial expressions, eyebrow height, mouth, and head orientation). We conducted a series of evaluations to investigate whether non-manual components would improve sign recognition accuracy. Among standard machine learning approaches, Logistic Regression produced the best results: 78.2% accuracy on the dataset with 20 signs and 77.9% accuracy on the dataset with 2 classes (statement vs. question). The dataset can be downloaded from the following website: https://krslproject.github.io/krsl20/
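
For illustration, a sketch of this kind of evaluation setup: logistic regression trained on manual features alone versus manual plus non-manual features. Real features would be extracted from video (hand keypoints, facial measurements, head pose); here they are stubbed with random arrays, so the printed accuracies are meaningless placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 500
manual = rng.normal(size=(n, 40))     # stand-in for hand keypoint features
nonmanual = rng.normal(size=(n, 12))  # stand-in for eyebrow/mouth/head features
labels = rng.integers(0, 20, size=n)  # 20 sign classes

for name, X in [("manual only", manual),
                ("manual + non-manual", np.hstack([manual, nonmanual]))]:
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```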

Automatic Classification of Handshapes in Russian Sign Language
Medet Mukushev | Alfarabi Imashev | Vadim Kimmelman | Anara Sandygulova
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives

Handshapes are one of the basic parameters of signs, and any phonological or phonetic analysis of a sign language must account for them. Many sign languages have been carefully analysed by sign language linguists to create handshape inventories. This has theoretical implications but also practical applications, given the need to generate sign language corpora that can be searched, filtered, and sorted by different sign components (such as handshape, orientation, location, and movement). However, creating such inventories is very time-consuming, so only a handful of sign languages have them. This work proposes a process for automatically generating such inventories by applying automatic hand detection, cropping, and clustering techniques. We applied the proposed method to a commonly used resource, the Spreadthesign online dictionary (www.spreadthesign.com), in particular to Russian Sign Language (RSL), and then manually verified the data to be able to perform classification. The proposed pipeline can thus serve as an alternative to manual annotation and can help linguists answer numerous research questions concerning handshape frequencies in sign languages.
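
A sketch of the clustering stage of such a pipeline, assuming hand crops have already been detected, cut out, and resized to a common size. HOG features and k-means stand in here for whatever detector, features, and clustering method the paper actually used; the cluster count is an illustrative assumption.

```python
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def cluster_hand_crops(crops, n_clusters=30):
    """crops: list of same-size grayscale hand images as 2-D numpy arrays."""
    feats = np.vstack([
        hog(crop, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for crop in crops
    ])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(feats)  # candidate handshape cluster per crop

# Toy usage with random 64x64 "crops".
rng = np.random.default_rng(3)
crops = [rng.random((64, 64)) for _ in range(200)]
print(cluster_hand_crops(crops)[:10])
```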

2018

IPSL: A Database of Iconicity Patterns in Sign Languages. Creation and Use
Vadim Kimmelman | Anna Klezovich | George Moroz
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)