Bencie Woll
2022
Segmentation of Signs for Research Purposes: Comparing Humans and Machines
Bencie Woll | Neil Fox | Kearsy Cormier
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources
Sign languages such as British Sign Language (BSL) are visual languages which lack standard writing systems. Annotation of sign language data, especially for the purposes of machine readability, is therefore extremely slow. Tools to help automate and thus speed up the annotation process are very much needed. Here we test the development of one such tool (VIA-SLA), which uses temporal convolutional networks (Renz et al., 2021a, b) for the purpose of segmenting continuous signing in any sign language, and is designed to integrate smoothly with ELAN, the widely used annotation software for analysis of videos of sign language. We compare automatic segmentation by machine with segmentation done by a human, both in terms of time needed and accuracy of segmentation, using samples taken from the BSL Corpus (Schembri et al., 2014). A small sample of four short video files is tested (mean duration 25 seconds). We find that mean accuracy in terms of number and location of segmentations is relatively high, at around 78%. This preliminary test suggests that VIA-SLA promises to be very useful for sign linguists.
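The abstract reports segmentation accuracy in terms of the number and location of segment boundaries, but does not spell out the matching criterion. As a purely illustrative sketch (not the paper's evaluation code), one common way to score this is to count a machine-proposed segment as correct when its temporal intersection-over-union (IoU) with a human-annotated segment exceeds a threshold; the function names and the 0.5 threshold below are assumptions for illustration.

```python
# Hypothetical sketch of segment-level agreement between machine and human
# annotation. Segments are (start, end) pairs in seconds. The IoU >= 0.5
# matching rule is an assumption, not the paper's stated metric.

def iou(a, b):
    """Temporal intersection-over-union of two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def segmentation_accuracy(machine, human, threshold=0.5):
    """Fraction of human-annotated segments matched by some machine segment."""
    matched = sum(
        any(iou(h, m) >= threshold for m in machine) for h in human
    )
    return matched / len(human) if human else 0.0

# Toy example: two of the three human segments are recovered by the machine.
human = [(0.0, 0.4), (0.5, 0.9), (1.0, 1.5)]
machine = [(0.05, 0.45), (0.6, 0.85), (2.0, 2.3)]
print(segmentation_accuracy(machine, human))
```

A stricter variant could also penalise spurious machine segments (a precision-style count); which direction matters depends on whether the tool's output is post-corrected by a human annotator, as in the workflow the paper tests.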
2020
Machine Learning for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language
Xing Liang | Bencie Woll | Kapetanios Epaminondas | Anastasia Angelopoulou | Reda Al-Batat
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives
The ageing trend in populations is correlated with an increased prevalence of acquired cognitive impairments such as dementia. Although there is no cure for dementia, a timely diagnosis helps in obtaining necessary support and appropriate medication. With this in mind, researchers are working urgently to develop effective technological tools that can help doctors identify cognitive disorders early. In this paper, we introduce an automatic dementia screening system for ageing Deaf signers of British Sign Language (BSL), using Convolutional Neural Networks (CNNs) to analyse the sign space envelope and facial expression of BSL signers in ordinary 2D videos from the BSL Corpus. Our approach first establishes an accurate real-time hand trajectory tracking model, together with a real-time facial landmark motion analysis model, to identify differences in sign space envelope and facial movement as the keys to identifying language changes associated with dementia. Based on the differences in patterns obtained from the facial and trajectory motion data, CNN models (ResNet50/VGG16) are fine-tuned in Keras to incrementally identify and improve dementia recognition rates. We report results for two methods using different modalities (sign trajectory and facial motion), together with performance comparisons between the ResNet50 and VGG16 models. The experiments show the effectiveness of our deep-learning-based approach on sign space tracking, facial motion tracking, and early-stage dementia assessment tasks. The results are validated against cognitive assessment scores as our ground truth data, with a test-set performance of 87.88%. The proposed system has potential as an economical, simple, flexible, and adaptable assessment of other acquired neurological impairments associated with motor changes, such as stroke and Parkinson's disease, in both hearing and Deaf people.
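The fine-tuning step the abstract describes (a pretrained ResNet50 or VGG16 adapted in Keras to a screening decision) can be sketched roughly as follows. This is not the authors' code: the input shape, frozen base, head layer sizes, and binary sigmoid output are illustrative assumptions about a typical transfer-learning setup.

```python
# Hedged sketch: fine-tuning a pretrained ResNet50 in Keras for a
# two-class screening task, as the abstract describes in outline.
# Layer sizes and the (224, 224, 3) input are assumptions, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(
    weights=None,  # "imagenet" in practice; None here to avoid a download
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # freeze pretrained features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # healthy vs. impaired score
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Swapping `ResNet50` for `tf.keras.applications.VGG16` gives the second backbone the paper compares; after the new head converges, the usual next step is to unfreeze the top of the base network and continue training at a lower learning rate.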