2022
A Dataset of Word-Complexity Judgements from Deaf and Hard-of-Hearing Adults for Text Simplification
Oliver Alonzo | Sooyeon Lee | Mounica Maddela | Wei Xu | Matt Huenerfauth
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
Research has explored the use of automatic text simplification (ATS), which consists of techniques to make text simpler to read, to provide reading assistance to Deaf and Hard-of-Hearing (DHH) adults with various literacy levels. Prior work in this area has identified interest in, and benefits from, ATS-based reading assistance tools. However, no prior work on ATS has gathered judgements from DHH adults as to what constitutes complex text. Thus, following approaches in prior NLP work, this paper contributes new word-complexity judgements from 11 DHH adults on a dataset of 15,000 English words that had previously been annotated by L2 speakers, which we also augmented with automatic annotations of the words' linguistic characteristics. Additionally, we conduct a supplementary analysis of the interaction effect between the linguistic characteristics of the words and the groups of annotators. This analysis revealed statistically significant interaction effects for nearly all of the linguistic characteristics, highlighting the importance of collecting judgements from DHH adults for training ATS systems.
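The interaction analysis described here can be approximated with an ordinary two-way linear model. Below is a minimal sketch under stated assumptions: the file name and columns ("complexity", "frequency", "group") are hypothetical, and the paper's actual feature set and modeling choices are not reproduced.

```python
# Minimal sketch of a word-feature x annotator-group interaction test.
# The file name and columns ("complexity", "frequency", "group") are
# hypothetical; the dataset's actual schema may differ.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("word_complexity_ratings.csv")

# "frequency * C(group)" expands to both main effects plus the
# frequency:C(group) interaction term; a significant interaction
# coefficient means word frequency predicts complexity differently
# across annotator groups (e.g., DHH adults vs. L2 speakers).
model = smf.ols("complexity ~ frequency * C(group)", data=ratings).fit()
print(model.summary())
```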
Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users
Akhter Al Amin | Saad Hassan | Cecilia Alm | Matt Huenerfauth
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
Deaf and hard of hearing (DHH) individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, these metrics are often not informed by the preferences of DHH users or by how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted a correlation analysis between two types of word embeddings and human-annotated word-importance scores in an existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 on the 6-class word-importance classification task.
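As an illustration of the kind of pairing described above, the sketch below embeds each word of a transcript with BERT and rank-correlates a derived scalar with human importance scores. Reducing each word's contextual embedding to its distance from the sentence centroid is an illustrative assumption, not necessarily the reduction used in the paper, and the toy labels are fabricated for demonstration.

```python
# Minimal sketch: pair BERT contextual word embeddings with human
# word-importance scores and measure rank correlation. The scalar
# reduction (1 - cosine to the sentence centroid) and the toy labels
# below are illustrative assumptions, not the paper's method or data.
import torch
from scipy.stats import spearmanr
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_salience(words, word_index):
    """One scalar per word: 1 - cosine(word vector, sentence centroid)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, 768)
    # Pool the subword vectors belonging to the target word.
    ids = [i for i, w in enumerate(enc.word_ids()) if w == word_index]
    vec = hidden[ids].mean(dim=0)
    centroid = hidden.mean(dim=0)
    cos = torch.nn.functional.cosine_similarity(vec, centroid, dim=0)
    return float(1.0 - cos)

words = "okay so the meeting starts at nine tomorrow".split()
human_scores = [0.1, 0.1, 0.2, 0.9, 0.7, 0.3, 0.8, 0.9]  # toy labels
model_scores = [word_salience(words, i) for i in range(len(words))]
rho, p = spearmanr(model_scores, human_scores)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")
```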
ASL-Homework-RGBD Dataset: An Annotated Dataset of 45 Fluent and Non-fluent Signers Performing American Sign Language Homeworks
Saad Hassan | Matthew Seita | Larwan Berke | Yingli Tian | Elaine Gale | Sooyeon Lee | Matt Huenerfauth
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources
We are releasing a dataset containing videos of both fluent and non-fluent signers using American Sign Language (ASL), collected using a Kinect v2 sensor. This dataset was collected as part of a project to develop and evaluate computer vision algorithms to support new technologies for the automatic detection of ASL fluency attributes. A total of 45 fluent and non-fluent participants were asked to perform signing homework assignments similar to those used in introductory or intermediate-level ASL courses. The data is annotated to identify several aspects of signing, including grammatical features and non-manual markers. Sign language recognition is currently very data-driven, and this dataset can support the design of recognition technologies, especially technologies that can benefit ASL learners. This dataset may also be of interest to ASL education researchers who want to contrast fluent and non-fluent signing.
Sign Language Video Anonymization
Zhaoyang Xia | Yuxiao Chen | Qilong Zhangli | Matt Huenerfauth | Carol Neidle | Dimitri Metaxas
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources
Deaf signers who wish to communicate in their native language frequently share videos on the Web. However, videos cannot preserve privacy—as is often desirable for discussion of sensitive topics—since both hands and face convey critical linguistic information and therefore cannot be obscured without degrading communication. Deaf signers have expressed interest in video anonymization that would preserve linguistic content. However, attempts to develop such technology have thus far shown limited success. We are developing a new method for such anonymization, with input from ASL signers. We modify a motion-based image animation model to generate high-resolution videos in which the signer's identity is changed but linguistically significant motions and facial expressions are preserved. An asymmetric encoder-decoder structured image generator is used to generate the high-resolution target frame from the low-resolution source frame based on the optical flow and confidence map. We explicitly guide the model to attain clear generation of hands and faces by using bounding boxes to improve the loss computation. Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) scores are used to evaluate the realism of the generated frames. This technology shows great potential for practical applications to benefit deaf signers.
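For reference, FID and KID can be computed with off-the-shelf tooling; the sketch below uses torchmetrics on placeholder tensors and is not the paper's evaluation pipeline.

```python
# Minimal sketch of computing FID and KID with torchmetrics (install
# with `pip install torchmetrics[image]`); random tensors stand in for
# real and anonymized video frames.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", float(fid.compute()))

kid = KernelInceptionDistance(subset_size=8)  # subset_size <= n samples
kid.update(real, real=True)
kid.update(fake, real=False)
kid_mean, kid_std = kid.compute()
print(f"KID: {float(kid_mean):.4f} +/- {float(kid_std):.4f}")
```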
2021
Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens
Saad Hassan | Matt Huenerfauth | Cecilia Ovesdotter Alm
Findings of the Association for Computational Linguistics: EMNLP 2021
Much of the world’s population experiences some form of disability during their lifetime. Caution must be exercised while designing natural language processing (NLP) systems to prevent systems from inadvertently perpetuating ableist bias against people with disabilities, i.e., prejudice that favors those with typical abilities. We report on various analyses based on word predictions of a large-scale BERT language model. Statistically significant results demonstrate that people with disabilities can be disadvantaged. Findings also explore overlapping forms of discrimination related to interconnected gender and race identities.
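A minimal sketch of the general probing setup, not the paper's protocol: compare a masked language model's word predictions across templates that differ only in whether a disability is mentioned. The templates below are hypothetical illustrations.

```python
# Minimal sketch of probing masked-word predictions for ableist bias;
# the two templates are hypothetical and differ only in the mention
# of disability.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "A deaf person works as a [MASK].",
    "A person works as a [MASK].",
]
for template in templates:
    predictions = fill(template, top_k=5)
    print(template, "->", [p["token_str"] for p in predictions])
```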
2020
An Isolated-Signing RGBD Dataset of 100 American Sign Language Signs Produced by Fluent ASL Signers
Saad Hassan | Larwan Berke | Elahe Vahdani | Longlong Jing | Yingli Tian | Matt Huenerfauth
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives
We have collected a new dataset consisting of color and depth videos of fluent American Sign Language (ASL) signers performing sequences of 100 ASL signs, recorded using a Kinect v2 sensor. This directed dataset had originally been collected as part of an ongoing collaborative project to aid in the development of a sign-recognition system for identifying occurrences of these 100 signs in video. The set of words consists of vocabulary items that would commonly be learned in a first-year ASL course offered at a university, although the specific set of signs selected for inclusion in the dataset had been motivated by project-related factors. Given increasing interest among sign-recognition and other computer-vision researchers in red-green-blue-depth (RGBD) video, we release this dataset for use by the research community. In addition to the RGB video files, we share depth and HD face data, as well as additional features of the face, hands, and body produced through post-processing of this data.
2019
Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken Dialogues
Sushant Kafle | Cissi Ovesdotter Alm | Matt Huenerfauth
Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies
Prosodic cues in conversational speech aid listeners in discerning a message. We investigate whether acoustic cues in spoken dialogue can be used to identify the importance of individual words to the meaning of a conversation turn. Individuals who are Deaf and Hard of Hearing often rely on real-time captions in live meetings. Word error rate, a traditional metric for evaluating automatic speech recognition (ASR), fails to capture that some words are more important for a system to transcribe correctly than others. We present and evaluate neural architectures that use acoustic features for 3-class word importance prediction. Our model performs competitively against state-of-the-art text-based word-importance prediction models, and it demonstrates particular benefits when operating on imperfect ASR output.
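A minimal sketch of this style of model, under stated assumptions: per-word acoustic-prosodic features (mean pitch, mean energy, duration, with word time boundaries assumed given, e.g., from a forced aligner) feed a small bidirectional GRU tagger with three output classes. The feature set and the hypothetical WordImportanceTagger architecture are illustrative, not the paper's.

```python
# Minimal sketch (assumed features and architecture, not the paper's
# model) of 3-class word-importance tagging from acoustic-prosodic cues.
# Word boundaries are assumed given; WordImportanceTagger is hypothetical.
import librosa
import numpy as np
import torch
import torch.nn as nn

def word_features(y, sr, start_s, end_s):
    """Mean pitch (F0), mean RMS energy, and duration for one word span."""
    seg = y[int(start_s * sr):int(end_s * sr)]
    f0, _, _ = librosa.pyin(seg, fmin=65, fmax=400, sr=sr)
    pitch = float(np.nanmean(f0)) if np.any(~np.isnan(f0)) else 0.0
    energy = float(librosa.feature.rms(y=seg).mean())
    return np.array([pitch, energy, end_s - start_s], dtype=np.float32)

class WordImportanceTagger(nn.Module):
    """Tiny bidirectional GRU over per-word feature vectors."""
    def __init__(self, n_feats=3, hidden=32, n_classes=3):
        super().__init__()
        self.rnn = nn.GRU(n_feats, hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):          # x: (batch, n_words, n_feats)
        h, _ = self.rnn(x)
        return self.out(h)         # logits: (batch, n_words, n_classes)

# Toy input: a synthetic tone stands in for speech, with two made-up
# word boundaries.
sr = 16000
y = librosa.tone(220, sr=sr, duration=1.0)
feats = torch.tensor(np.stack([
    word_features(y, sr, 0.00, 0.45),
    word_features(y, sr, 0.45, 1.00),
]))[None]                          # shape: (1, 2, 3)
logits = WordImportanceTagger()(feats)
print(logits.argmax(dim=-1))       # predicted importance class per word
```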
2018
A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts
Sushant Kafle | Matt Huenerfauth
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
2016
Continuous Profile Models in ASL Syntactic Facial Expression Synthesis
Hernisa Kacorri | Matt Huenerfauth
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2015
Bridging the gap between sign language machine translation and sign language animation using sequence classification
Sarah Ebling | Matt Huenerfauth
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies
Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data
Matt Huenerfauth | Pengfei Lu | Hernisa Kacorri
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies
Evaluating a Dynamic Time Warping Based Scoring Algorithm for Facial Expressions in ASL Animations
Hernisa Kacorri | Matt Huenerfauth
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies
2012
Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data
Pengfei Lu | Matt Huenerfauth
Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies
2010
Collecting a Motion-Capture Corpus of American Sign Language for Data-Driven Generation Research
Pengfei Lu | Matt Huenerfauth
Proceedings of the NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive Technologies
A Comparison of Features for Automatic Readability Assessment
Lijun Feng | Martin Jansche | Matt Huenerfauth | Noémie Elhadad
Coling 2010: Posters
2009
Cognitively Motivated Features for Readability Assessment
Lijun Feng | Noémie Elhadad | Matt Huenerfauth
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)
2007
Design and Evaluation of an American Sign Language Generator
Matt Huenerfauth | Liming Zhou | Erdan Gu | Jan Allbeck
Proceedings of the Workshop on Embodied Language Processing
2006
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium
Matt Huenerfauth | Bo Pang | Mitch Marcus
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium
2005
American Sign Language Generation: Multimodal NLG with Multiple Linguistic Channels
Matt Huenerfauth
Proceedings of the ACL Student Research Workshop
2004
A Multi-Path Architecture for Machine Translation of English Text into American Sign Language Animation
Matt Huenerfauth
Proceedings of the Student Research Workshop at HLT-NAACL 2004
Spatial and planning models of ASL classifier predicates for machine translation
Matt Huenerfauth
Proceedings of the 10th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages