2022
The Spoken Language Understanding MEDIA Benchmark Dataset in the Era of Deep Learning: data updates, training and evaluation tools
Gaëlle Laperrière | Valentin Pelloin | Antoine Caubrière | Salima Mdhaffar | Nathalie Camelin | Sahar Ghannay | Bassam Jabaian | Yannick Estève
Proceedings of the Thirteenth Language Resources and Evaluation Conference
With the emergence of neural end-to-end approaches for spoken language understanding (SLU), a growing number of studies on this topic have been presented over the last three years. Most of these works address the spoken language understanding domain through a simple task like speech intent detection. In this context, new benchmark datasets related to this task have also been produced and shared with the community. In this paper, we focus on the French MEDIA SLU dataset, distributed since 2005 and used as a benchmark dataset for a large number of research works. This dataset has been shown to be the most challenging one among those accessible to the research community. Distributed by ELRA, this corpus has been free for academic research since 2019. Unfortunately, the MEDIA dataset is rarely used beyond the French research community. To facilitate its use, a complete recipe, including data preparation, training and evaluation scripts, has been built and integrated into SpeechBrain, an already popular open-source and all-in-one conversational AI toolkit based on PyTorch. This recipe is presented in this paper. In addition, based on the feedback of researchers who have worked on this dataset for several years, some corrections have been made to the initial manual annotation: the new version of the data will also be integrated into the ELRA catalogue, like the original one. Moreover, a significant amount of data collected during the construction of the MEDIA corpus in the 2000s had never been used until now: we present the first results obtained on this subset, also included in the MEDIA SpeechBrain recipe, which will from now on serve as the MEDIA test2 set. Finally, we discuss evaluation issues.
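As an illustration of how such a SpeechBrain recipe is typically driven, here is a minimal sketch following SpeechBrain's standard Brain training loop. The class name and everything reached through self.hparams and self.modules are placeholders that would be defined in the recipe's YAML file; this is not the actual MEDIA recipe code.

```python
# Minimal sketch of a SpeechBrain-style SLU training class. Names under
# self.hparams and self.modules are illustrative placeholders, not the
# actual MEDIA recipe internals.
import speechbrain as sb

class MediaSLUBrain(sb.Brain):
    def compute_forward(self, batch, stage):
        batch = batch.to(self.device)
        wavs, wav_lens = batch.sig                   # padded waveforms
        feats = self.hparams.compute_features(wavs)  # e.g. filterbank features
        logits = self.modules.model(feats)           # encoder defined in YAML
        return self.hparams.log_softmax(logits), wav_lens

    def compute_objectives(self, predictions, batch, stage):
        log_probs, wav_lens = predictions
        tokens, token_lens = batch.tokens            # concept/word targets
        # CTC loss over the semantic token sequence (placeholder objective)
        return self.hparams.ctc_cost(log_probs, tokens, wav_lens, token_lens)

# Recipes of this kind are usually launched from the command line as:
#   python train.py hparams/train.yaml
```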
Impact Analysis of the Use of Speech and Language Models Pretrained by Self-Supervision for Spoken Language Understanding
Salima Mdhaffar | Valentin Pelloin | Antoine Caubrière | Gaëlle Laperrière | Sahar Ghannay | Bassam Jabaian | Nathalie Camelin | Yannick Estève
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Pretrained models obtained through self-supervised learning have recently been introduced for both acoustic and language modeling. Applied to spoken language understanding tasks, these models have shown their great potential by improving the state-of-the-art performance on challenging benchmark datasets. In this paper, we present an error analysis of such models on the French MEDIA benchmark dataset, known to be one of the most challenging benchmarks for the slot filling task among all the benchmarks accessible to the entire research community. One year ago, the state-of-the-art system reached a Concept Error Rate (CER) of 13.6% through the use of an end-to-end neural architecture. A few months later, a cascade approach based on the sequential use of a fine-tuned wav2vec 2.0 model and a fine-tuned BERT model reached a CER of 11.2%. This significant improvement raises questions about the type of errors that remain difficult to handle, but also about those that have been corrected by these models pretrained through self-supervised learning on large amounts of data. This study provides some answers to better understand the limits of such models and opens new perspectives for continuing to improve their performance.
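For readers unfamiliar with the metric, here is a minimal sketch of how a Concept Error Rate of this kind is commonly computed: a Levenshtein alignment between the reference and hypothesis concept sequences, normalized by the reference length. This is an illustrative reconstruction, not the official MEDIA scoring script, and the concept labels in the example are invented.

```python
# Concept Error Rate sketch:
# CER = (substitutions + deletions + insertions) / len(reference)
def concept_error_rate(reference, hypothesis):
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[n][m] / n if n else 0.0

# Example: one substitution over four reference concepts -> CER = 25.0%
ref = ["command", "room-type", "date", "location"]
hyp = ["command", "room-number", "date", "location"]
print(f"CER = {concept_error_rate(ref, hyp):.1%}")
```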
Using ASR-Generated Text for Spoken Language Modeling
Nicolas Hervé | Valentin Pelloin | Benoit Favre | Franck Dary | Antoine Laurent | Sylvain Meignier | Laurent Besacier
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
This paper aims at improving spoken language modeling (LM) using a very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute) collection and obtain 19GB of text after applying ASR on 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT) or by training an LM from scratch. The new models (FlauBERT-Oral) will be shared with the community and are evaluated not only in terms of word prediction accuracy but also on two downstream tasks: classification of TV shows and syntactic parsing of speech. Experimental results show that FlauBERT-Oral is better than its initial FlauBERT version, demonstrating that, despite its inherent noisy nature, ASR-generated text can be useful to improve spoken language modeling.
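As a rough illustration of the fine-tuning route described above, the sketch below continues masked language model training of FlauBERT on a file of ASR transcripts using Hugging Face Transformers. The file path and training hyperparameters are placeholders, and the actual FlauBERT-Oral training setup may well have differed.

```python
# Sketch: continue FlauBERT pretraining on ASR-generated text with the
# standard masked-LM objective. "asr_transcripts.txt" is a placeholder
# file with one transcribed utterance per line.
from transformers import (FlaubertTokenizer, FlaubertWithLMHeadModel,
                          DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")

dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                file_path="asr_transcripts.txt",
                                block_size=128)

# Randomly mask 15% of tokens, the usual masked-LM recipe.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="flaubert-oral",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```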
2020
Apprentissage de plongements de mots sur des corpus en langue de spécialité : une étude d’impact (Learning word embeddings on domain specific corpora : an impact study )
Valentin Pelloin | Thibault Prouteau
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 3 : Rencontre des Étudiants Chercheurs en Informatique pour le TAL
Word embedding learning methods are now the state of the art for representing vocabulary and documents as vectors in many Natural Language Processing (NLP) tasks. In this work, we consider the learning and use of word embeddings on small domain-specific corpora. In particular, we want to know whether, in this setting, it is preferable to use embeddings pretrained on very large corpora such as Wikipedia, or rather to learn embeddings on these domain-specific corpora. To answer this question, we consider two domain-specific corpora: OHSUMED, from the medical domain, and a corpus of technical documentation owned by SNCF. After introducing these corpora and assessing their specificity, we define a classification task. For this task, we choose to feed a neural classifier with document representations based either on embeddings learned on the domain-specific corpora or on embeddings learned on Wikipedia. Our analysis shows that the embeddings learned on Wikipedia yield very good results. They can be used as a reliable baseline, even though, in the case of OHSUMED, it is better to learn embeddings on that same corpus. The discussion of the results examines the specificities of the two corpora, but does not clearly establish in which cases corpus-specific embeddings should be learned.
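To make the comparison concrete, here is a minimal sketch assuming a gensim-based setup: embeddings are either trained on the small specialty corpus or loaded from Wikipedia-pretrained vectors, and each document is represented as the average of its word vectors before classification. The toy corpus and all names below are illustrative, not the paper's actual data or code.

```python
# Sketch of the two embedding sources compared in the paper.
import numpy as np
from gensim.models import Word2Vec

# Toy domain-specific corpus: one tokenized document per list (placeholder).
docs = [["maintenance", "des", "voies", "ferrées"],
        ["signalisation", "des", "trains"]]

# Option 1: learn embeddings directly on the small specialty corpus.
w2v = Word2Vec(sentences=docs, vector_size=100, window=5,
               min_count=1, epochs=50)

def doc_vector(tokens, keyed_vectors):
    """Average word vectors into a fixed-size document representation."""
    vecs = [keyed_vectors[t] for t in tokens if t in keyed_vectors]
    return (np.mean(vecs, axis=0) if vecs
            else np.zeros(keyed_vectors.vector_size))

x = doc_vector(docs[0], w2v.wv)  # input feature for the neural classifier

# Option 2 (not run here): load Wikipedia-pretrained vectors instead, e.g.
# gensim.models.KeyedVectors.load_word2vec_format(...), and build the same
# averaged representations as input to the classifier.
```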