Antoine Laurent


2024

Évaluation perceptive de l’anticipation de la prise de parole lors d’interactions dialogiques en français
Rémi Uro | Albert Rilliard | David Doukhan | Marie Tahon | Antoine Laurent
Actes des 35èmes Journées d'Études sur la Parole

This study presents a perceptual test assessing the cues that allow listeners to plan turn-taking in spontaneous oral interactions. Inter-Pausal Units (IPUs) were extracted from dialogues of the REPERE corpus and annotated for terminality. To determine which parameters affect judgments on the opportunity to take the turn, the stimuli were presented in either audio or textual form. Participants had to indicate whether they could take the turn “Now”, “Soon”, or “Not yet” at the end of IPUs truncated by 0 to 3 prosodic words. Participants were less likely to take the turn at non-terminal boundaries in the audio modality than in the textual one. The audio modality also allowed a turn end to be anticipated at least three words before it occurred, whereas the textual modality allowed less anticipation. These results support the importance of cues carried by speech for planning dialogue interactions.

Annotation of Transition-Relevance Places and Interruptions for the Description of Turn-Taking in Conversations in French Media Content
Rémi Uro | Marie Tahon | Jane Wottawa | David Doukhan | Albert Rilliard | Antoine Laurent
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Few speech resources describe interruption phenomena, especially for TV and media content. The description of these phenomena may vary across authors: it thus leaves room for improved annotation protocols. We present an annotation of Transition-Relevance Places (TRP) and Floor-Taking event types on an existing French TV and radio broadcast corpus to facilitate studies of interruptions and turn-taking. Each speaker change is annotated with the presence or absence of a TRP, and a classification of the next speaker’s floor-taking as Smooth, Backchannel, or different types of turn violations (cooperative or competitive, successful or attempted interruption). An inter-rater agreement analysis shows the moderate to substantial reliability of such annotations. Inter-annotator agreement reaches κ=0.75 for TRP annotation, κ=0.56 for Backchannel, and κ=0.5 for the Interruption/non-interruption distinction. Finer distinctions linked to cooperative or competitive behaviors lead to lower agreement. These results underline the importance of low-level features such as TRP for deriving a classification of turn changes that is less subject to interpretation. The analysis of overlapping speech highlights the existence of interruptions without overlaps and smooth transitions with overlaps. These annotations are available at https://lium.univ-lemans.fr/corpus-allies/.
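The κ values above are Cohen’s kappa scores. As a quick illustration of how such chance-corrected agreement is computed, here is a minimal sketch in Python; the labels and toy data are hypothetical, not taken from the corpus:

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Chance-corrected agreement between two annotators."""
    assert len(ann1) == len(ann2)
    n = len(ann1)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Expected agreement under independent labelling, using each
    # annotator's marginal label distribution.
    c1, c2 = Counter(ann1), Counter(ann2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two annotators marking TRP presence at speaker changes.
a = ["TRP", "TRP", "noTRP", "TRP", "noTRP", "TRP"]
b = ["TRP", "noTRP", "noTRP", "TRP", "noTRP", "TRP"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```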

Automatic Speech Interruption Detection: Analysis, Corpus, and System
Martin Lebourdais | Marie Tahon | Antoine Laurent | Sylvain Meignier
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Interruption detection is a new yet challenging task in the field of speech processing. This article presents a comprehensive study of automatic speech interruption detection, covering the definition of the task, the assembly of a specialized corpus, and the development of an initial baseline system. We provide three main contributions. Firstly, we define the task, taking into account the nuanced nature of interruptions within spontaneous conversations. Secondly, we introduce a new corpus of conversational data, annotated for interruptions, to facilitate research in this domain. This corpus serves as a valuable resource for evaluating and advancing interruption detection techniques. Lastly, we present a first baseline system, which uses speech processing methods to automatically identify interruptions in speech with promising results. Starting from theoretical notions of interruption, we build a simplified definition of the phenomenon based on overlapped speech detection. Our findings can not only serve as a foundation for further research in the field but also provide a benchmark for assessing future advancements in automatic speech interruption detection.

2023

ON-TRAC Consortium Systems for the IWSLT 2023 Dialectal and Low-resource Speech Translation Tasks
Antoine Laurent | Souhir Gahbiche | Ha Nguyen | Haroun Elleuch | Fethi Bougares | Antoine Thiol | Hugo Riguidel | Salima Mdhaffar | Gaëlle Laperrière | Lucas Maison | Sameer Khurana | Yannick Estève
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)

This paper describes the ON-TRAC consortium speech translation systems developed for the IWSLT 2023 evaluation campaign. Overall, we participated in three speech translation tracks featured in the low-resource and dialect speech translation shared tasks, namely: (i) spoken Tamasheq to written French, (ii) spoken Pashto to written French, and (iii) spoken Tunisian to written English. All our primary submissions are based on an end-to-end speech-to-text neural architecture using a pretrained SAMU-XLSR model as a speech encoder and an mBART model as a decoder. The SAMU-XLSR model is built from XLS-R 128 in order to generate language-agnostic sentence-level embeddings; its training is guided by the LaBSE model trained on a multilingual text dataset. This architecture allows us to improve the input speech representations and achieve significant improvements compared to conventional end-to-end speech translation systems.

2022

Overlaps and Gender Analysis in the Context of Broadcast Media
Martin Lebourdais | Marie Tahon | Antoine Laurent | Sylvain Meignier | Anthony Larcher
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Our main goal is to study the interactions between speakers according to their gender and role in broadcast media. In this paper, we propose an extensive study of gender and overlap annotations in various speech corpora mainly dedicated to diarisation or transcription tasks. We point out the issue of the heterogeneity of the annotation guidelines for both overlapping speech and gender categories. We also analyse how the speech content (casual speech, meetings, debates, interviews, etc.) impacts the distribution of overlapping speech segments. On a small dataset of 93 recordings from the French LCP channel, we characterise the interactions between speakers according to their gender. Finally, we propose a method that aims to highlight active speech areas in terms of interactions between speakers. Such a visualisation tool could improve the efficiency of qualitative studies conducted by researchers in the human sciences.

A Semi-Automatic Approach to Create Large Gender- and Age-Balanced Speaker Corpora: Usefulness of Speaker Diarization & Identification.
Rémi Uro | David Doukhan | Albert Rilliard | Laetitia Larcher | Anissa-Claire Adgharouamane | Marie Tahon | Antoine Laurent
Proceedings of the Thirteenth Language Resources and Evaluation Conference

This paper presents a semi-automatic approach to create a diachronic corpus of voices balanced for speaker age, gender, and recording period, according to 32 categories (2 genders, 4 age ranges and 4 recording periods). Corpora were selected at the French National Institute of Audiovisual (INA) to obtain at least 30 speakers per category (a total of 960 speakers; only 874 have been found so far). For each speaker, speech excerpts were extracted from audiovisual documents using an automatic pipeline consisting of speech detection, background music and overlapped speech removal, and speaker diarization, used to present clean speaker segments to human annotators identifying target speakers. This pipeline proved highly effective, cutting down manual processing by a factor of ten. An evaluation of the quality of the automatic processing and of the final output is provided. It shows that the automatic processing is comparable to state-of-the-art methods, and that the output provides high-quality speech for most of the selected excerpts. This method is thus recommendable for creating large corpora of known target speakers.

Using ASR-Generated Text for Spoken Language Modeling
Nicolas Hervé | Valentin Pelloin | Benoit Favre | Franck Dary | Antoine Laurent | Sylvain Meignier | Laurent Besacier
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models

This paper aims at improving spoken language modeling (LM) using a very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute) collection and obtain 19GB of text after applying ASR on 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT) or by training an LM from scratch. The new models (FlauBERT-Oral) will be shared with the community and are evaluated not only in terms of word prediction accuracy but also on two downstream tasks: classification of TV shows and syntactic parsing of speech. Experimental results show that FlauBERT-Oral outperforms its initial FlauBERT version, demonstrating that, despite its inherently noisy nature, ASR-generated text can be useful to improve spoken language modeling.

ON-TRAC Consortium Systems for the IWSLT 2022 Dialect and Low-resource Speech Translation Tasks
Marcely Zanon Boito | John Ortega | Hugo Riguidel | Antoine Laurent | Loïc Barrault | Fethi Bougares | Firas Chaabani | Ha Nguyen | Florentin Barbier | Souhir Gahbiche | Yannick Estève
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)

This paper describes the ON-TRAC Consortium translation systems developed for two challenge tracks featured in the Evaluation Campaign of IWSLT 2022: low-resource and dialect speech translation. For the Tunisian Arabic-English dataset (low-resource and dialect tracks), we build an end-to-end model as our joint primary submission, and compare it against cascaded models that leverage a large fine-tuned wav2vec 2.0 model for ASR. Our results show that in our settings pipeline approaches are still very competitive, and that with the use of transfer learning, they can outperform end-to-end models for speech translation (ST). For the Tamasheq-French dataset (low-resource track), our primary submission leverages intermediate representations from a wav2vec 2.0 model trained on 234 hours of Tamasheq audio, while our contrastive model uses a French phonetic transcription of the Tamasheq audio as input to a Conformer speech translation architecture jointly trained on automatic speech recognition, ST, and machine translation losses. Our results highlight that self-supervised models trained on smaller sets of target data are more effective for low-resource end-to-end ST fine-tuning than large off-the-shelf models. Results also illustrate that even approximate phonetic transcriptions can improve ST scores.

2020

A Multimodal Educational Corpus of Oral Courses: Annotation, Analysis and Case Study
Salima Mdhaffar | Yannick Estève | Antoine Laurent | Nicolas Hernandez | Richard Dufour | Delphine Charlet | Geraldine Damnati | Solen Quiniou | Nathalie Camelin
Proceedings of the Twelfth Language Resources and Evaluation Conference

This corpus is part of the PASTEL (Performing Automated Speech Transcription for Enhancing Learning) project, which aims to explore the potential of synchronous speech transcription and its application in specific teaching situations. It includes 10 hours of different lectures, manually transcribed and segmented. The main interest of this corpus lies in its multimodal aspect: in addition to speech, the courses were filmed and the written presentation supports (slides) are made available. The dataset may thus serve research in multiple fields, from speech and language to image and video processing. The dataset will be freely available to the research community. In this paper, we first describe the annotation protocol in detail, including an analysis of the manually labeled data. Then, we propose some possible use cases of the corpus with baseline results. The use cases concern scientific fields from both speech and text processing, with language model adaptation, thematic segmentation, and transcription-to-slide alignment.

Where are we in Named Entity Recognition from Speech?
Antoine Caubrière | Sophie Rosset | Yannick Estève | Antoine Laurent | Emmanuel Morin
Proceedings of the Twelfth Language Resources and Evaluation Conference

Named entity recognition (NER) from speech is usually performed through a pipeline that consists of (i) processing audio using an automatic speech recognition (ASR) system and (ii) applying NER to the ASR outputs. The latest data available for named entity extraction from speech in French were produced during the ETAPE evaluation campaign in 2012. Since the publication of ETAPE’s campaign results, major improvements have been made to NER and ASR systems, especially with the development of neural approaches for both of these components. In addition, recent studies have shown the capability of End-to-End (E2E) approaches for NER / SLU tasks. In this paper, we propose a study of the improvements made in speech recognition and named entity recognition for pipeline approaches. For this type of system, we propose an original 3-pass approach. We also explore the capability of an E2E system to perform structured NER. Finally, we compare the performance of ETAPE’s systems (state-of-the-art systems in 2012) with the performance obtained using current technologies. The results show the interest of the E2E approach, which however remains below an updated pipeline approach.

Où en sommes-nous dans la reconnaissance des entités nommées structurées à partir de la parole ? (Where are we in Named Entity Recognition from speech ?)
Antoine Caubrière | Sophie Rosset | Yannick Estève | Antoine Laurent | Emmanuel Morin
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 1 : Journées d'Études sur la Parole

Named entity recognition (NER) from speech is traditionally performed through a pipeline of components, using an automatic speech recognition (ASR) system followed by a NER system applied to the automatic transcriptions. The latest data available for structured NER from French speech come from the ETAPE evaluation campaign in 2012. Since the publication of those results, major improvements have been made to NER and ASR systems, notably with the development of neural systems. Moreover, some works have shown the interest of end-to-end approaches for NER from speech. We propose a study of the improvements in ASR and NER within a pipeline of components, as well as a new three-step approach. We also explore the capabilities of an end-to-end approach for structured NER. Finally, we compare these two types of approaches to the state of the art of the ETAPE campaign. Our results show the interest of the end-to-end approach, which nevertheless remains below a fully updated pipeline of components.

2019

Curriculum d’apprentissage : reconnaissance d’entités nommées pour l’extraction de concepts sémantiques (Curriculum learning : named entity recognition for semantic concept extraction)
Antoine Caubrière | Natalia Tomashenko | Yannick Estève | Antoine Laurent | Emmanuel Morin
Actes de la Conférence sur le Traitement Automatique des Langues Naturelles (TALN) PFIA 2019. Volume I : Articles longs

In this article, we present an end-to-end approach for extracting semantic concepts from speech. In particular, we highlight the contribution of a chain of successive training steps driven by a curriculum learning strategy. In this training chain, we exploit French data annotated with named entities, which we assume to be more generic concepts than the semantic concepts tied to a specific application. In this study, the goal is to extract semantic concepts in the context of the MEDIA task. To strengthen the proposed system, we also exploit data augmentation strategies, a 5-gram language model, and a star mode that helps the system focus on concepts and their values during training. The results show the benefit of using named entity data, yielding a relative gain of up to 6.5%.

Apport de l’adaptation automatique des modèles de langage pour la reconnaissance de la parole: évaluation qualitative extrinsèque dans un contexte de traitement de cours magistraux (Contribution of automatic adaptation of language models for speech recognition : extrinsic qualitative evaluation in a context of educational courses)
Salima Mdhaffar | Yannick Estève | Nicolas Hernandez | Antoine Laurent | Solen Quiniou
Actes de la Conférence sur le Traitement Automatique des Langues Naturelles (TALN) PFIA 2019. Volume II : Articles courts

Despite the known weaknesses of this metric, the performance of different automatic speech recognition systems is generally compared using the word error rate. The automatic transcriptions produced by these systems are increasingly usable and are employed in complex natural language processing systems, for example for machine translation, indexing, or information retrieval. Recent studies have proposed metrics for comparing the quality of automatic transcriptions from different systems according to the targeted task. In this study we wish to measure, qualitatively, the contribution of automatic adaptation of language models to the domain covered by a lecture. The transcriptions of the teacher’s speech can serve as a support for navigating the video document of the lecture or enable the enrichment of its pedagogical content. It is through the prism of these two tasks that we evaluate the contribution of language model adaptation. The experiments were conducted on a corpus of lectures and show how the word error rate is an insufficient metric that masks the actual contributions of language model adaptation.

2018

Le corpus PASTEL pour le traitement automatique de cours magistraux (PASTEL corpus for automatic processing of lectures)
Salima Mdhaffar | Antoine Laurent | Yannick Estève
Actes de la Conférence TALN. Volume 1 - Articles longs, articles courts de TALN

The PASTEL project studies the acceptability and usability of automatic transcriptions in the context of lectures. The aim is to equip learners to enrich, synchronously and automatically, the information they can access during the session. This enrichment relies on natural language processing applied to the automatic transcriptions. In this article we present work on the annotation of lecture recordings made in the context of the CominOpenCourseware project. These annotations aim to support experiments in automatic transcription, thematic segmentation, real-time automatic matching with external resources, etc. This corpus comprises more than nine hours of annotated speech. We also present preliminary experiments carried out to evaluate the automatic adaptation of our speech recognition system.

2012

Combinaison d’approches pour la reconnaissance du rôle des locuteurs (Combination of approaches for speaker role recognition) [in French]
Richard Dufour | Antoine Laurent | Yannick Estève
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 1: JEP

2008

Combined Systems for Automatic Phonetic Transcription of Proper Nouns
Antoine Laurent | Téva Merlin | Sylvain Meignier | Yannick Estève | Paul Deléglise
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Large vocabulary automatic speech recognition (ASR) technologies perform well in known, controlled contexts. However, recognition of proper nouns is commonly considered a difficult task. An accurate phonetic transcription of a proper noun is difficult to obtain, although it can be one of the most important resources for a recognition system. In this article, we propose methods of automatic phonetic transcription applied to proper nouns. The methods are based on combinations of the rule-based phonetic transcription generator LIA_PHON and an acoustic-phonetic decoding system. On the ESTER corpus, we observed that the combined systems obtain better results than our reference system (LIA_PHON). The WER (Word Error Rate) decreased on segments of speech containing proper nouns, without negatively affecting the results on the rest of the corpus. On the same corpus, the Proper Noun Error Rate (PNER, a WER computed on proper nouns only) also decreased with our new system.
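The WER and PNER metrics mentioned above both reduce to a word-level edit distance between a reference and a hypothesis transcription. A minimal sketch of the standard computation (function name and example strings are illustrative, not the authors’ implementation):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = word-level edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("le président jacques chirac parle", "le président jacques chirac parle"))  # 0.0
```

A per-category rate such as PNER follows the same principle, restricted to the reference positions occupied by the category of interest (here, proper nouns).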