2024
Jargon : Une suite de modèles de langues et de référentiels d’évaluation pour les domaines spécialisés du français
Vincent Segonne | Aidan Mannion | Laura Alonzo-Canul | Alexandre Audibert | Xingyu Liu | Cécile Macaire | Adrien Pupier | Yongxin Zhou | Mathilde Aguiar | Felix Herron | Magali Norré | Massih-Reza Amini | Pierrette Bouillon | Iris Eshkol-Taravella | Emmanuelle Esperança-Rodier | Thomas François | Lorraine Goeuriot | Jérôme Goulian | Mathieu Lafourcade | Benjamin Lecouteux | François Portet | Fabien Ringeval | Vincent Vandeghinste | Maximin Coavoux | Marco Dinarelli | Didier Schwab
Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 2 : traductions d'articles publiés
Pretrained language models (PLMs) are today the de facto backbone of most natural language processing systems. In this paper, we present Jargon, a family of PLMs for specialized domains of French, focusing on three domains: transcribed speech, the clinical/biomedical domain, and the legal domain. We use a transformer architecture based on computationally efficient methods (LinFormer), since these domains often involve processing long documents. We evaluate and compare our models to state-of-the-art models on a varied set of evaluation tasks and corpora, some of which are introduced in this paper. We gather these datasets into a new French-language evaluation benchmark for these three domains. We also compare various training configurations: extended self-supervised pretraining on the specialized data, pretraining from scratch, as well as single- and multi-domain pretraining. Our extensive experiments on specialized domains show that competitive downstream performance can be achieved even when pretraining with LinFormer's approximate attention mechanism. For full reproducibility, we release the models and pretraining data, as well as the corpora used.
Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains
Vincent Segonne | Aidan Mannion | Laura Cristina Alonzo Canul | Alexandre Daniel Audibert | Xingyu Liu | Cécile Macaire | Adrien Pupier | Yongxin Zhou | Mathilde Aguiar | Felix E. Herron | Magali Norré | Massih R Amini | Pierrette Bouillon | Iris Eshkol-Taravella | Emmanuelle Esperança-Rodier | Thomas François | Lorraine Goeuriot | Jérôme Goulian | Mathieu Lafourcade | Benjamin Lecouteux | François Portet | Fabien Ringeval | Vincent Vandeghinste | Maximin Coavoux | Marco Dinarelli | Didier Schwab
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Pretrained Language Models (PLMs) are the de facto backbone of most state-of-the-art NLP systems. In this paper, we introduce a family of domain-specific PLMs for French, focusing on three important domains: transcribed speech, medicine, and law. We use a transformer architecture based on efficient methods (LinFormer) to maximise their utility, since these domains often involve processing long documents. We evaluate and compare our models to state-of-the-art models on a diverse set of tasks and datasets, some of which are introduced in this paper. We gather the datasets into a new French-language evaluation benchmark for these three domains. We also compare various training configurations: continued pretraining, pretraining from scratch, as well as single- and multi-domain pretraining. Extensive domain-specific experiments show that it is possible to attain competitive downstream performance even when pretraining with the approximate LinFormer attention mechanism. For full reproducibility, we release the models and pretraining data, as well as the contributed datasets.
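Both abstracts above refer to LinFormer's approximate attention, which replaces the quadratic self-attention of a standard transformer with a low-rank approximation: the keys and values are compressed along the sequence axis before attention is computed. Below is a minimal single-head sketch of that mechanism in PyTorch; the class name, initialisation, and single-head simplification are illustrative assumptions, not code from the Jargon release.

```python
# Minimal sketch of Linformer-style attention (single head).
# Learned projections proj_k / proj_v compress the sequence axis of the
# keys and values from length n down to k << n, so the attention map is
# (n x k) instead of (n x n).
import math
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    def __init__(self, dim: int, seq_len: int, k: int = 256):
        super().__init__()
        self.scale = math.sqrt(dim)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Low-rank projections along the sequence dimension (seq_len -> k)
        self.proj_k = nn.Parameter(torch.randn(k, seq_len) / math.sqrt(seq_len))
        self.proj_v = nn.Parameter(torch.randn(k, seq_len) / math.sqrt(seq_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k_, v = self.to_q(x), self.to_k(x), self.to_v(x)
        k_ = torch.einsum("kn,bnd->bkd", self.proj_k, k_)  # (batch, k, dim)
        v = torch.einsum("kn,bnd->bkd", self.proj_v, v)    # (batch, k, dim)
        attn = torch.softmax(q @ k_.transpose(1, 2) / self.scale, dim=-1)
        return attn @ v  # (batch, seq_len, dim), in O(n*k) rather than O(n^2)

# Usage: a long document fits in memory where full attention would not.
out = LinformerSelfAttention(dim=768, seq_len=4096)(torch.randn(2, 4096, 768))
```

This O(n·k) cost is what makes pretraining on the long documents typical of clinical and legal text tractable, at the price of an approximation to the full attention matrix.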
PSentScore: Evaluating Sentiment Polarity in Dialogue Summarization
Yongxin Zhou | Fabien Ringeval | François Portet
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Automatic dialogue summarization is a well-established task that aims to distill the most crucial information from human conversations into concise textual summaries. However, most existing research has focused predominantly on summarizing factual information, neglecting affective content, which can hold valuable insights for analyzing, monitoring, or facilitating human interactions. In this paper, we introduce and assess PSentScore, a set of measures aimed at quantifying the preservation of affective content in dialogue summaries. Our findings indicate that state-of-the-art summarization models do not preserve affective content well in their summaries. Moreover, we demonstrate that a careful selection of dialogue training samples can improve the preservation of affective content in the generated summaries, albeit with a minor reduction in content-related metrics.
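The abstract does not spell out how the measures are computed, so the following is only a hypothetical sketch of the general idea: classify the sentiment polarity of each sentence in the source dialogue and in the summary, then compare the two polarity distributions. The function `classify_polarity` is an assumed stand-in for any sentence-level sentiment classifier, and the distance choice (total variation) is illustrative, not the paper's formulation.

```python
# Hypothetical sentiment-preservation measure: how closely does the
# summary's polarity distribution match the dialogue's?
from collections import Counter

LABELS = ("positive", "negative", "neutral")

def polarity_distribution(sentences, classify_polarity):
    """Fraction of sentences carrying each polarity label."""
    counts = Counter(classify_polarity(s) for s in sentences)
    total = max(len(sentences), 1)
    return {label: counts[label] / total for label in LABELS}

def sentiment_preservation(dialogue_sents, summary_sents, classify_polarity):
    """1 - total variation distance between the two polarity distributions:
    1.0 = identical affective profile, 0.0 = completely disjoint."""
    p = polarity_distribution(dialogue_sents, classify_polarity)
    q = polarity_distribution(summary_sents, classify_polarity)
    tvd = 0.5 * sum(abs(p[label] - q[label]) for label in LABELS)
    return 1.0 - tvd
```

A score near 1.0 would indicate that the summary mirrors the dialogue's affective balance; the paper's finding is that current models score poorly on such preservation.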
2023
A Survey of Evaluation Methods of Generated Medical Textual Reports
Yongxin Zhou | Fabien Ringeval | François Portet
Proceedings of the 5th Clinical Natural Language Processing Workshop
Medical Report Generation (MRG) is a sub-task of Natural Language Generation (NLG) that aims to present information from various sources in textual form and to synthesize the salient information, with the goal of reducing the time domain experts spend writing medical reports and of providing supporting information for decision-making. Given the specificity of the medical domain, the evaluation of automatically generated medical reports is of paramount importance to the validity of these systems. In this paper, we therefore focus on the evaluation of automatically generated medical reports from the perspective of both automatic and human evaluation. We present evaluation methods for general NLG and examine how they have been applied to domain-specific medical tasks. The study shows that MRG evaluation methods are very diverse and that further work is needed to build shared evaluation methods. The state of the art also emphasizes that such evaluation must be task-specific and include human assessments, requiring the participation of experts in the field.
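As a concrete illustration of the automatic side of the evaluation methods the survey covers, the snippet below computes ROUGE overlap between a reference and a generated report using the `rouge-score` package (`pip install rouge-score`). The report strings are made up for illustration; as the survey stresses, such n-gram metrics alone are insufficient for clinical validity and need to be complemented by expert human assessment.

```python
# Example of a standard automatic NLG metric applied to a generated report.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "No acute cardiopulmonary abnormality. Heart size is normal."
generated = "Heart size is normal with no acute abnormality."
scores = scorer.score(reference, generated)
for name, s in scores.items():
    # Each entry holds precision, recall and F1 for that ROUGE variant.
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```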
2022
Effectiveness of French Language Models on Abstractive Dialogue Summarization Task
Yongxin Zhou | François Portet | Fabien Ringeval
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Pre-trained language models have established the state of the art on various natural language processing tasks, including dialogue summarization, which allows the reader to quickly access key information from long conversations in meetings, interviews or phone calls. However, such dialogues are still difficult to handle with current models because the spontaneity of the language involves expressions that are rarely present in the corpora used for pre-training the language models. Moreover, the vast majority of work in this field has focused on English. In this work, we present a study on the summarization of spontaneous oral dialogues in French using several language-specific pre-trained models (BARThez and BelGPT-2) as well as multilingual pre-trained models (mBART, mBARThez, and mT5). Experiments were performed on the DECODA (call center) dialogue corpus, whose task is to generate abstractive synopses from call-center conversations between a caller and one or several agents, depending on the situation. Results show that the BARThez models offer the best performance, far above the previous state of the art on DECODA. We further discuss the limits of such pre-trained models and the challenges that must be addressed for summarizing spontaneous dialogues.
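For readers who want to try one of the French models named above, the sketch below generates an abstractive summary with a publicly available BARThez checkpoint via Hugging Face Transformers. The checkpoint `moussaKam/barthez-orangesum-abstract` is an assumption on our part (a BARThez model fine-tuned on OrangeSum, not on DECODA, which is not publicly distributed); the input string is a made-up call-center turn.

```python
# Minimal sketch: abstractive summarization with a BARThez checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "moussaKam/barthez-orangesum-abstract"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

dialogue = "Bonjour, j'appelle au sujet de ma carte de transport qui ne fonctionne plus depuis lundi."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Reproducing the paper's setting would additionally require fine-tuning such a model on DECODA synopses, which this sketch does not cover.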