2024
Extension d’AZee avec des règles de production concernant les gestes non-manuels pour la langue des signes française
Camille Challant
|
Michael Filhol
Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 1 : articles longs et prises de position
This paper presents a study on non-manual gestures (NMGs) using AZee, an approach that makes it possible to formally represent Sign Language (SL) discourses and to animate them with a virtual signer. Since NMGs are essential in SL and therefore necessary for quality synthesis, our goal is to extend the AZee production rule set with rules covering NMGs. To do so, we applied the methodology for finding new production rules to a French Sign Language corpus, 40 brèves. 23 rules concerning NMGs were identified. We took advantage of this study to insert these rules into the first corpus of AZee expressions, which describe in AZee the SL productions of the 40 brèves corpus. Our study yields a new version of the AZee expression corpus, containing 533 occurrences of rules relating to NMGs.
Facial Expressions for Sign Language Synthesis using FACSHuman and AZee
Paritosh Sharma
|
Camille Challant
|
Michael Filhol
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources
Extending AZee with Non-manual Gesture Rules for French Sign Language
Camille Challant
|
Michael Filhol
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
This paper presents a study on non-manual gestures using a formal model named AZee, an approach that allows Sign Language (SL) discourses both to be formally represented and to be animated with a virtual signer. As non-manual gestures are essential in SL and therefore necessary for quality synthesis, we set out to extend AZee to cover them by adding production rules to the AZee production set. For this purpose, we applied a methodology for finding new production rules to a corpus representing one hour of French Sign Language (LSF), the 40 brèves (Challant and Filhol, 2022). 23 production rules for non-manual gestures in LSF were thus determined. We took advantage of this study to insert these new rules directly into the first corpus of AZee discourse expressions, which describe in AZee the SL productions of the 40 brèves corpus. 533 occurrences of non-manual rules were inserted into the corpus, and some updates were made. This article proposes a new version of this AZee expression corpus.
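As a rough illustration of the kind of layering described in this abstract, the short Python sketch below builds a toy nested expression and wraps it with a non-manual rule. The Expr class, the rule names ("info-about", "nm:raised-eyebrows", "sign:...") and the wrapping function are all invented for illustration; they are not actual AZee syntax or rules from the paper.

    # Illustrative only: a toy nested-expression structure inspired by AZee's
    # hierarchical rule applications. Rule names are placeholders.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Expr:
        rule: str                        # production rule name (placeholder)
        args: List["Expr"] = field(default_factory=list)

        def pretty(self, indent: int = 0) -> str:
            pad = "  " * indent
            if not self.args:
                return pad + self.rule
            inner = "\n".join(a.pretty(indent + 1) for a in self.args)
            return f"{pad}{self.rule}(\n{inner}\n{pad})"

    def add_non_manual(expr: Expr, nm_rule: str) -> Expr:
        # Wrap an existing (mostly manual) expression with a non-manual rule,
        # mirroring how a non-manual layer can dominate a stretch of discourse.
        return Expr(nm_rule, [expr])

    manual = Expr("info-about", [Expr("sign:PARIS"), Expr("sign:BEAUTIFUL")])
    print(add_non_manual(manual, "nm:raised-eyebrows").pretty())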
2023
Une grammaire formelle pour les langues des signes basée sur AZee : une proposition établie sur une étude de corpus
Camille Challant
|
Michael Filhol
Actes de CORIA-TALN 2023. Actes de la 30e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 2 : travaux de recherche originaux -- articles courts
This paper offers initial reflections on building a formal grammar for sign languages based on the AZee approach. We conducted a statistical study on a corpus of AZee expressions, which describe French Sign Language discourses. This lets us identify constraints on these expressions, which more generally reflect constraints of French Sign Language. We present some of these constraints and position our draft grammar theoretically among the existing formal grammars.
Traduction à base d’exemples du texte vers une représentation hiérarchique de la langue des signes
Elise Bertin-Lemée
|
Annelies Braffort
|
Camille Challant
|
Claire Danet
|
Michael Filhol
Actes de CORIA-TALN 2023. Actes de la 30e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 4 : articles déjà soumis ou acceptés en conférence internationale
This paper presents a text-to-Sign-Language (SL) machine translation experiment. Since no large aligned corpus is available, we explored an example-based approach using AZee, an intermediate representation of SL discourse in the form of hierarchical expressions.
Example-Based Machine Translation from Text to a Hierarchical Representation of Sign Language
Elise Bertin-Lemée
|
Annelies Braffort
|
Camille Challant
|
Claire Danet
|
Michael Filhol
Proceedings of the 24th Annual Conference of the European Association for Machine Translation
This article presents an original method for Text-to-Sign Translation. It compensates for data scarcity using a domain-specific parallel corpus of alignments between text and hierarchical formal descriptions of Sign Language videos. Based on the detection of similarities in the source text, the proposed algorithm recursively exploits matches and substitutions of aligned segments to build multiple candidate translations for a novel statement, in a generative way. This helps preserve Sign Language structures as much as possible rather than falling back on literal translations too quickly. The resulting translations take the form of AZee expressions, designed to be used as input to avatar synthesis systems. We present a test set tailored to showcase the method's potential for expressiveness and generation of idiomatic target language, as well as its observed limitations. Finally, this work opens prospects on how to evaluate this kind of translation.
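To make the match-and-substitute idea more concrete, here is a minimal, hypothetical Python sketch of example-based lookup over a toy aligned table. The EXAMPLES pairs, the placeholder target strings and the "seq" combination step are assumptions for illustration only; they are not the authors' algorithm and not real AZee expressions.

    # Minimal, illustrative example-based lookup: NOT the paper's implementation.
    from typing import Dict, Optional

    # Toy "parallel corpus": source text segment -> target expression (placeholder).
    EXAMPLES: Dict[str, str] = {
        "il pleut": "weather(rain)",
        "demain": "time(tomorrow)",
        "à paris": "place(PARIS)",
    }

    def translate(text: str) -> Optional[str]:
        """Build a candidate translation by matching known aligned segments
        and recursively translating the remaining context (substitution)."""
        text = text.strip().lower()
        if not text:
            return None
        if text in EXAMPLES:                      # full match: reuse the example
            return EXAMPLES[text]
        # Otherwise look for the longest known segment inside the input and
        # translate the left/right remainders recursively.
        for seg in sorted(EXAMPLES, key=len, reverse=True):
            if seg in text:
                left, right = (part.strip() for part in text.split(seg, 1))
                parts = [translate(left), EXAMPLES[seg], translate(right)]
                pieces = [p for p in parts if p]
                # "seq" is a placeholder combination rule, not an AZee rule.
                return pieces[0] if len(pieces) == 1 else f"seq({', '.join(pieces)})"
        return None                               # no example found: give up here

    print(translate("Demain il pleut à Paris"))
    # -> seq(time(tomorrow), weather(rain), place(PARIS))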
2022
A First Corpus of AZee Discourse Expressions
Camille Challant
|
Michael Filhol
Proceedings of the Thirteenth Language Resources and Evaluation Conference
This paper presents a corpus of AZee discourse expressions, i.e. expressions which formally describe Sign Language utterances of any length using the AZee approach and language. The construction of this corpus had two main goals: to provide a first reference corpus for AZee, and to test its coverage on a significant sample of real-life utterances. We worked on productions from an existing corpus, the “40 brèves”, containing an hour of French Sign Language. We wrote the corresponding AZee discourse expressions for the entire video content, i.e. expressions capturing the forms produced by the signers and their associated meaning by combining known production rules, the basic building blocks of these expressions. They are made available as a version 2 extension of the “40 brèves”. We explain how these expressions can be built, present the resulting corpus and the set of production rules used, and perform first measurements on it. We also propose an evaluation of our corpus: of one hour of discourse, AZee can describe 94%, and ongoing studies are increasing this coverage. This corpus opens many prospects, for instance for synthesis with virtual signers, machine translation, and formal grammars for Sign Language.
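By way of illustration of the kind of first measurements mentioned above, the following Python sketch counts production-rule occurrences over a toy set of nested expressions. The expressions, rule names and tuple encoding are placeholders invented here, not data or notation from the corpus.

    # Illustrative only: counting rule occurrences over toy nested expressions.
    from collections import Counter
    from typing import List, Tuple

    # A discourse expression encoded as a nested (rule, children) tuple.
    Expr = Tuple[str, list]

    CORPUS: List[Expr] = [
        ("info-about", [("sign:PARIS", []), ("sign:BEAUTIFUL", [])]),
        ("side-comment", [("info-about", [("sign:RAIN", []), ("sign:TOMORROW", [])])]),
    ]

    def count_rules(expr: Expr, counts: Counter) -> None:
        rule, children = expr
        counts[rule] += 1
        for child in children:
            count_rules(child, counts)

    counts: Counter = Counter()
    for expr in CORPUS:
        count_rules(expr, counts)

    for rule, n in counts.most_common():
        print(f"{rule}: {n}")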
Rosetta-LSF: an Aligned Corpus of French Sign Language and French for Text-to-Sign Translation
Elise Bertin-Lemée
|
Annelies Braffort
|
Camille Challant
|
Claire Danet
|
Boris Dauriac
|
Michael Filhol
|
Emmanuella Martinod
|
Jérémie Segouat
Proceedings of the Thirteenth Language Resources and Evaluation Conference
This article presents a new French Sign Language (LSF) corpus called “Rosetta-LSF”. It was created to support future studies on the automatic translation of written French into LSF, rendered through the animation of a virtual signer. An overview of the field highlights the importance of a quality representation of LSF. In order to obtain quality animations that are understandable by signers, this representation must go beyond a simple “gloss transcription” of the LSF lexical units used in the discourse. To achieve this, we designed a corpus composed of four types of aligned data and evaluated its usability. These are: news headlines in French; translations of these headlines into LSF, in the form of videos showing animations of a virtual signer; gloss annotations of the “traditional” type, although including additional information on the context in which each gestural unit is performed as well as its potential for adaptation to another context; and AZee representations of the videos, i.e. formal expressions capturing the necessary and sufficient linguistic information. This article describes the data and exhibits an example from the corpus. The corpus is available online for public research.
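As a purely illustrative sketch of how one aligned entry of such a four-way corpus could be held in code, consider the Python dataclass below; the field names and sample values are assumptions made here for illustration, not the actual Rosetta-LSF schema.

    # Hypothetical container for one aligned entry (field names are assumptions).
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AlignedEntry:
        headline_fr: str        # news headline in written French
        video_file: str         # LSF translation rendered by a virtual signer
        glosses: List[str]      # "traditional" gloss annotation of the video
        azee_expression: str    # formal AZee representation of the same video

    entry = AlignedEntry(
        headline_fr="Il pleuvra demain à Paris",   # invented example
        video_file="clip_0001.mp4",                # invented file name
        glosses=["DEMAIN", "PARIS", "PLUIE"],      # invented glosses
        azee_expression="placeholder-expression",  # not real AZee syntax
    )
    print(entry)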