Julia Krebs
2024
Motion Capture Analysis of Verb and Adjective Types in Austrian Sign Language (ÖGS)
Julia Krebs | Evguenia A. Malaia | Isabella Fessl | Hans-Peter Wiesinger | Dietmar Roehm | Ronnie Wilbur | Hermann Schwameder
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Across a number of sign languages, temporal and spatial characteristics of dominant hand articulation are used to express semantic and grammatical features. In this study of Austrian Sign Language (Österreichische Gebärdensprache, or ÖGS), motion capture data from four Deaf signers is used to quantitatively characterize the kinematic parameters of sign production in verbs and adjectives. We investigate (1) the difference in production between verbs involving a natural endpoint (telic verbs; e.g. arrive) and verbs lacking an endpoint (atelic verbs; e.g. analyze), and (2) adjective signs in intensified vs. non-intensified (plain) forms. Motion capture data analysis using linear mixed-effects models (LME) indicates that both the endpoint marking in verbs and the marking of intensification in adjectives are expressed by movement modulation in ÖGS. While the semantic distinction between verb types (telic/atelic) is marked by higher peak velocity and shorter duration for telic signs compared to atelic ones, the grammatical distinction (intensification) in adjectives is expressed by longer duration for intensified compared to non-intensified adjectives. The observed individual differences between signers might be interpreted as personal signing style.
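As an illustration of the kind of analysis described in the abstract, the sketch below fits a linear mixed-effects model relating one kinematic parameter (peak velocity) to verb type, with a random intercept per signer. The data frame, column names, and values are invented placeholders, and the model is fit with statsmodels rather than whatever software the authors used; this is a minimal sketch of the general approach, not the paper's actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-token kinematic measurements: one row per produced sign,
# with peak velocity (mm/s), verb type (telic/atelic), and signer ID.
# Values and column names are placeholders, not the paper's data.
data = pd.DataFrame({
    "peak_velocity": [1450, 980, 1530, 940, 1410, 1005, 1490, 960],
    "verb_type":     ["telic", "atelic"] * 4,
    "signer":        ["S1", "S1", "S2", "S2", "S3", "S3", "S4", "S4"],
})

# Linear mixed-effects model: verb type as a fixed effect, with a random
# intercept per signer to absorb individual differences in signing style.
model = smf.mixedlm("peak_velocity ~ verb_type", data, groups=data["signer"])
result = model.fit()
print(result.summary())  # fixed-effect coefficient estimates the telic/atelic difference
```

The same model structure would apply to other dependent variables such as sign duration, swapping the response column in the formula.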
2022
Felix&Julia at SemEval-2022 Task 4: Patronizing and Condescending Language Detection
Felix Herrmann | Julia Krebs
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes the authors’ submission to SemEval-2022 Task 4: Patronizing and Condescending Language (PCL) Detection. The aim of the task is the detection and classification of PCL in an annotated dataset. Subtask 1 is a binary classification task (PCL or not PCL). Subtask 2 is a multi-label classification task where the system identifies different categories of PCL. The authors submitted two models: a RoBERTa model and a DistilBERT model. Both systems performed better than the random and RoBERTa baselines provided by the task organizers. The RoBERTa model fine-tuned by the authors performed better than the DistilBERT model in both subtasks.
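For illustration only, the sketch below shows the general shape of binary PCL classification with a RoBERTa sequence-classification head via the Hugging Face transformers library. The "roberta-base" checkpoint, the example sentence, and the label mapping are assumptions made for demonstration; this is not the authors' submitted system or its trained weights.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical setup: a RoBERTa encoder with a two-way classification head
# (PCL vs. not PCL). The checkpoint name and label mapping are placeholders.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

text = "These poor families just need someone to show them the right way."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()  # 0 = not PCL, 1 = PCL (assumed mapping)
print(pred)
```

A DistilBERT variant would follow the same pattern with a different checkpoint name; in the paper, the fine-tuned RoBERTa model outperformed it on both subtasks.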