Irina Illina


2022

pdf bib
Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection
Tulika Bose | Nikolaos Aletras | Irina Illina | Dominique Fohr
Findings of the Association for Computational Linguistics: ACL 2022

Hate speech classifiers exhibit substantial performance degradation when evaluated on datasets different from the source. This degradation is largely due to classifiers learning spurious correlations between hate speech labels and words in the training corpus that are not necessarily relevant to hateful language. Previous work has attempted to mitigate this problem by regularizing specific terms from pre-defined static dictionaries. While this has been demonstrated to improve the generalizability of classifiers, the coverage of such methods is limited and the dictionaries require regular manual updates from human experts. In this paper, we propose to automatically identify and reduce spurious correlations using attribution methods, with dynamic refinement of the list of terms to be regularized during training. Our approach is flexible and improves cross-corpora performance over previous work, both on its own and in combination with pre-defined dictionaries.
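
As a rough sketch of the idea, the following PyTorch snippet adds an attribution penalty on flagged terms to a standard classification loss. The attribution method (gradient-x-input), the mask construction, and the weight `lam` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def gradient_x_input(model, embeds, labels):
    """Per-token attributions via gradient-x-input (one common choice)."""
    embeds = embeds.detach().requires_grad_(True)
    loss = F.cross_entropy(model(embeds), labels)
    grads, = torch.autograd.grad(loss, embeds, create_graph=True)
    return (grads * embeds).sum(-1).abs()          # [batch, seq_len]

def loss_with_term_penalty(model, embeds, labels, term_mask, lam=0.1):
    """Cross-entropy plus a penalty on attributions of flagged tokens.

    term_mask marks tokens on the current spurious-term list; in the
    spirit of the paper, that list would be refreshed during training
    from highly attributed, label-correlated terms.
    """
    ce = F.cross_entropy(model(embeds), labels)
    attr = gradient_x_input(model, embeds, labels)
    penalty = (attr * term_mask).sum() / term_mask.sum().clamp(min=1)
    return ce + lam * penalty
```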

pdf bib
Identification of Multiword Expressions in Tweets for Hate Speech Detection
Nicolas Zampieri | Carlos Ramisch | Irina Illina | Dominique Fohr
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Multiword expression (MWE) identification in tweets is a challenging task due to the complex linguistic nature of MWEs combined with the non-standard language use in social networks. MWE features have been shown to be helpful for hate speech detection (HSD). In this article, we present joint experiments on these two related tasks on English Twitter data: first we focus on the MWE identification task, and then we observe the influence of MWE-based features on the HSD task. For MWE identification, we compare the performance of two systems: lexicon-based and deep neural network-based (DNN). We experimentally evaluate seven configurations of a state-of-the-art DNN system based on recurrent networks using pre-trained contextual embeddings from BERT. The DNN-based system outperforms the lexicon-based one thanks to its superior generalisation power, yielding much better recall. For the HSD task, we propose a new DNN architecture for incorporating MWE features. We confirm that MWE features are helpful for the HSD task. Moreover, the proposed DNN architecture beats previous MWE-based HSD systems by 0.4 to 1.1 F-measure points on average on four Twitter HSD corpora.
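
A minimal sketch of a classifier head that combines a sentence embedding with MWE-tag features, to make the feature-combination idea concrete; the dimensions, the projection layer, and the count-based MWE features are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HSDWithMWE(nn.Module):
    """Toy hate speech classifier fusing sentence and MWE features."""
    def __init__(self, sent_dim=768, n_mwe_tags=10, n_classes=2):
        super().__init__()
        self.mwe_proj = nn.Linear(n_mwe_tags, 32)    # embed MWE-tag counts
        self.out = nn.Linear(sent_dim + 32, n_classes)

    def forward(self, sent_emb, mwe_counts):
        # sent_emb: [batch, sent_dim] from e.g. BERT
        # mwe_counts: [batch, n_mwe_tags] from an upstream MWE identifier
        mwe_feat = torch.relu(self.mwe_proj(mwe_counts))
        return self.out(torch.cat([sent_emb, mwe_feat], dim=-1))
```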

pdf bib
Transformer versus LSTM Language Models trained on Uncertain ASR Hypotheses in Limited Data Scenarios
Imran Sheikh | Emmanuel Vincent | Irina Illina
Proceedings of the Thirteenth Language Resources and Evaluation Conference

In several ASR use cases, training and adaptation of domain-specific LMs can only rely on a small amount of manually verified text transcriptions and sometimes a limited amount of in-domain speech. Training of LSTM LMs in such limited data scenarios can benefit from alternate uncertain ASR hypotheses, as observed in our recent work. In this paper, we propose a method to train Transformer LMs on ASR confusion networks. We evaluate whether these self-attention based LMs are better at exploiting alternate ASR hypotheses than LSTM LMs. Evaluation results show that Transformer LMs achieve a 3-6% relative reduction in perplexity on the AMI scenario meetings but perform similarly to LSTM LMs on the smaller Verbmobil conversational corpus. Evaluation on ASR N-best rescoring shows that LSTM and Transformer LMs trained on ASR confusion networks do not bring significant WER reductions. However, a qualitative analysis reveals that they are better at predicting less frequent words.
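
For context, N-best rescoring with an external LM usually interpolates the ASR score with the LM score; the generic recipe is sketched below, with the interpolation weight `alpha` as an assumed hyperparameter tuned on held-out data.

```python
def rescore_nbest(hypotheses, lm_score, alpha=0.5):
    """Re-rank N-best ASR hypotheses with an external language model.

    hypotheses: list of (text, asr_score) pairs, scores in log domain.
    lm_score:   callable returning a log-probability for a text.
    """
    rescored = [(text, asr + alpha * lm_score(text))
                for text, asr in hypotheses]
    return max(rescored, key=lambda pair: pair[1])[0]
```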

pdf bib
Placing M-Phasis on the Plurality of Hate: A Feature-Based Corpus of Hate Online
Dana Ruiter | Liane Reiners | Ashwin Geet D’Sa | Thomas Kleinbauer | Dominique Fohr | Irina Illina | Dietrich Klakow | Christian Schemer | Angeliki Monnier
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Even though hate speech (HS) online has been an important object of research in the last decade, most HS-related corpora over-simplify the phenomenon of hate by attempting to label user comments as “hate” or “neutral”. This ignores the complex and subjective nature of HS, which limits the real-life applicability of classifiers trained on these corpora. In this study, we present the M-Phasis corpus, a corpus of ~9k German and French user comments collected from migration-related news articles. It goes beyond the “hate”-“neutral” dichotomy and is instead annotated with 23 features, which in combination become descriptors of various types of speech, ranging from critical comments to implicit and explicit expressions of hate. The annotations are performed by 4 native speakers per language and achieve high inter-annotator agreement (0.77 ≤ κ ≤ 1). Besides describing the corpus creation and presenting insights from a content, error and domain analysis, we explore its data characteristics by training several classification baselines.
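
For readers unfamiliar with the agreement statistic, Cohen's kappa for one annotator pair on one binary feature can be computed as below; the labels are toy data, not taken from M-Phasis.

```python
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]  # made-up binary judgements
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0]
print(cohen_kappa_score(annotator_a, annotator_b))  # 0.75
```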

pdf bib
Transferring Knowledge via Neighborhood-Aware Optimal Transport for Low-Resource Hate Speech Detection
Tulika Bose | Irina Illina | Dominique Fohr
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

The concerning rise of hateful content on online platforms has increased the attention towards automatic hate speech detection, commonly formulated as a supervised classification task. State-of-the-art deep learning-based approaches usually require a substantial amount of labeled resources for training. However, annotating hate speech resources is expensive, time-consuming, and often harmful to the annotators. This creates a pressing need to transfer knowledge from the existing labeled resources to low-resource hate speech corpora with the goal of improving system performance. For this, neighborhood-based frameworks have been shown to be effective, but they have limited flexibility. In this paper, we propose a novel training strategy that allows flexible modeling of the relative proximity of neighbors retrieved from a resource-rich corpus to learn the amount of transfer. In particular, we incorporate neighborhood information with Optimal Transport, which permits exploiting the geometry of the data embedding space. By aligning the joint embedding and label distributions of neighbors, we demonstrate substantial improvements over strong baselines in low-resource scenarios on several publicly available hate speech corpora.
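
A minimal sketch of the optimal-transport ingredient using the POT library's entropic Sinkhorn solver; the uniform sample weights, squared-Euclidean cost, and regularization strength are illustrative choices, and the paper's method additionally couples label distributions and neighborhood information.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def transport_plan(src_emb, tgt_emb, reg=0.05):
    """Entropic OT coupling between source and target embedding sets."""
    a = np.full(len(src_emb), 1.0 / len(src_emb))  # uniform source weights
    b = np.full(len(tgt_emb), 1.0 / len(tgt_emb))  # uniform target weights
    M = ot.dist(src_emb, tgt_emb)                  # squared-Euclidean costs
    return ot.sinkhorn(a, b, M / M.max(), reg)     # [n_src, n_tgt] coupling
```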

pdf bib
Identification des Expressions Polylexicales dans les Tweets (Identification of Multiword Expressions in Tweets)
Nicolas Zampieri | Carlos Ramisch | Irina Illina | Dominique Fohr
Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 1 : conférence principale

Identifying multiword expressions (MWEs) in tweets is a difficult task because of the complex linguistic nature of MWEs combined with the use of non-standard language. In this article, we present this identification task on English Twitter data. We compare the performance of two systems: one using a lexicon and the other deep neural networks. We experimentally evaluate seven configurations of a state-of-the-art system based on recurrent neural networks using contextual embeddings generated by BERT. The neural network-based system outperforms the lexicon-based approach, whose lexicon is collected automatically from MWEs in corpora, thanks to its superior generalisation power.

pdf bib
Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection
Tulika Bose | Nikolaos Aletras | Irina Illina | Dominique Fohr
Proceedings of the 29th International Conference on Computational Linguistics

State-of-the-art approaches for hate-speech detection usually exhibit poor performance in out-of-domain settings. This typically occurs because classifiers overemphasize source-specific information, which negatively impacts their domain invariance. Prior work has attempted to penalize terms related to hate-speech from manually curated lists using feature attribution methods, which quantify the importance assigned to input terms by the classifier when making a prediction. We instead propose a domain adaptation approach that automatically extracts and penalizes source-specific terms using a domain classifier, which learns to differentiate between domains, together with feature-attribution scores for hate-speech classes, yielding consistent improvements in cross-domain evaluation.
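
One simplified way to picture the term-extraction step: train a linear domain classifier over bags of words and read off the most source-indicative terms from its weights. This is an assumption-laden stand-in for the paper's pipeline, which uses feature-attribution scores rather than raw coefficients.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def source_specific_terms(src_texts, tgt_texts, top_k=50):
    """Terms a linear domain classifier leans on to spot the source domain."""
    vec = CountVectorizer(min_df=2)
    X = vec.fit_transform(src_texts + tgt_texts)
    y = [0] * len(src_texts) + [1] * len(tgt_texts)  # 0 = source domain
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    vocab = np.array(vec.get_feature_names_out())
    return vocab[np.argsort(clf.coef_[0])[:top_k]]   # most source-leaning
```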

2021

pdf bib
Generalisability of Topic Models in Cross-corpora Abusive Language Detection
Tulika Bose | Irina Illina | Dominique Fohr
Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda

Rapidly changing social media content calls for robust and generalisable abuse detection models. However, state-of-the-art supervised models display degraded performance when they are evaluated on abusive comments that differ from the training corpus. We investigate whether the performance of supervised models for cross-corpora abuse detection can be improved by incorporating additional information from topic models, as the latter can infer the latent topic mixtures of unseen samples. In particular, we combine topical information with representations from a model tuned for classifying abusive comments. Our performance analysis reveals that topic models are able to capture abuse-related topics that can transfer across corpora, and result in improved generalisability.
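
As a sketch of how latent topic mixtures can be inferred for unseen samples and then combined with classifier representations, here is a minimal version with scikit-learn's LDA; the topic count and preprocessing are illustrative assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def topic_features(train_texts, test_texts, n_topics=20):
    """Doc-topic mixtures for unseen texts, from a model fit on training data."""
    vec = CountVectorizer(stop_words="english", min_df=2)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(vec.fit_transform(train_texts))
    return lda.transform(vec.transform(test_texts))  # [n_docs, n_topics]

# These topic vectors would then be concatenated with the tuned
# classifier's representation before the output layer.
```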

pdf bib
Unsupervised Domain Adaptation in Cross-corpora Abusive Language Detection
Tulika Bose | Irina Illina | Dominique Fohr
Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media

State-of-the-art abusive language detection models report strong in-corpus performance, but underperform when evaluated on abusive comments that differ from the training scenario. As human annotation involves substantial time and effort, models that can adapt to newly collected comments can prove to be useful. In this paper, we investigate the effectiveness of several Unsupervised Domain Adaptation (UDA) approaches for the task of cross-corpora abusive language detection. For comparison, we adapt a variant of the BERT model, trained on large-scale abusive comments, using Masked Language Model (MLM) fine-tuning. Our evaluation shows that the UDA approaches result in sub-optimal performance, while MLM fine-tuning does better in the cross-corpora setting. A detailed analysis reveals the limitations of the UDA approaches and emphasizes the need to build efficient adaptation methods for this task.
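
A minimal sketch of MLM fine-tuning on unlabeled target-corpus comments with the Hugging Face Trainer; the base checkpoint, the toy texts, and all hyperparameters are placeholders rather than the paper's setup.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in checkpoint
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

texts = ["placeholder target-domain comment", "another unlabeled comment"]
dataset = [tok(t, truncation=True, max_length=128) for t in texts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-adapted", num_train_epochs=3),
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
    train_dataset=dataset,
)
trainer.train()  # masked-token prediction adapts the encoder to the new corpus
```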

2020

pdf bib
Towards Non-Toxic Landscapes: Automatic Toxic Comment Detection Using DNN
Ashwin Geet D’Sa | Irina Illina | Dominique Fohr
Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying

The spectacular expansion of the Internet has led to the development of a new research problem in the field of natural language processing: automatic toxic comment detection, since many countries prohibit hate speech in public media. There is no clear and formal definition of hate, offensive, toxic, and abusive speech. In this article, we put all these terms under the umbrella of “toxic speech”. The contribution of this paper is the design of binary classification and regression-based approaches aiming to predict whether a comment is toxic or not. We compare different unsupervised word representations and different DNN-based classifiers. Moreover, we study the robustness of the proposed approaches to adversarial attacks by adding one (healthy or toxic) word. We evaluate the proposed methodology on the English Wikipedia Detox corpus. Our experiments show that BERT fine-tuning outperforms feature-based BERT, Mikolov’s and fastText representations with different DNN classifiers.
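
The one-word adversarial probe can be pictured as follows; the insertion strategy and the word lists are illustrative assumptions rather than the paper's exact protocol.

```python
import random

def one_word_attack(comment, insert_words, rng=random.Random(0)):
    """Insert one (healthy or toxic) word at a random position."""
    tokens = comment.split()
    tokens.insert(rng.randrange(len(tokens) + 1), rng.choice(insert_words))
    return " ".join(tokens)

def attack_success_rate(classifier, comments, insert_words):
    """Fraction of predictions flipped by a single inserted word."""
    flipped = sum(classifier(c) != classifier(one_word_attack(c, insert_words))
                  for c in comments)
    return flipped / len(comments)
```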

pdf bib
Label Propagation-Based Semi-Supervised Learning for Hate Speech Classification
Ashwin Geet D’Sa | Irina Illina | Dominique Fohr | Dietrich Klakow | Dana Ruiter
Proceedings of the First Workshop on Insights from Negative Results in NLP

Research on hate speech classification has received increased attention. In real-life scenarios, only a small amount of labeled hate speech data is typically available for training a reliable classifier. Semi-supervised learning takes advantage of a small amount of labeled data together with a large amount of unlabeled data. In this paper, label propagation-based semi-supervised learning is explored for the task of hate speech classification. The quality of labeling the unlabeled set depends on the input representations. In this work, we show that pre-trained representations are label agnostic and, when used with label propagation, yield poor results. Neural network-based fine-tuning can be adopted to learn task-specific representations using a small amount of labeled data. We show that fully fine-tuned representations may not always be the best representations for label propagation, and intermediate representations may perform better in a semi-supervised setup.
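
A minimal sketch of the label-propagation step with scikit-learn; the kernel settings are illustrative, and `features` stands for whichever representation (pre-trained, intermediate, or fully fine-tuned) is being compared.

```python
from sklearn.semi_supervised import LabelPropagation

def propagate(features, labels):
    """Spread labels from the labeled to the unlabeled samples.

    labels uses -1 for unlabeled samples, per scikit-learn's convention.
    """
    lp = LabelPropagation(kernel="rbf", gamma=20)
    lp.fit(features, labels)
    return lp.transduction_  # inferred labels for every sample
```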

pdf bib
Reconnaissance automatique de la parole : génération des prononciations non natives pour l’enrichissement du lexique (Automatic speech recognition: generation of non-native pronunciations for lexicon enrichment)
Ismael Bada | Dominique Fohr | Irina Illina
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 1 : Journées d'Études sur la Parole

In this article, we propose a lexicon adaptation method intended to improve automatic speech recognition (ASR) systems for non-native speakers. ASR indeed suffers a significant drop in performance when it is used to recognise the speech of non-native speakers, because phonemes of the foreign language are frequently mispronounced by these speakers. To take this problem of erroneous pronunciations into account, our approach integrates non-native pronunciations into the lexicon and then uses this enriched lexicon for recognition. Our approach requires a small corpus of non-native speech and its transcription. To generate the non-native pronunciations, we propose to exploit grapheme-phoneme correspondences in order to automatically derive rules for creating new pronunciations, which are then added to the lexicon. We present an evaluation of our method on a corpus of non-native French speakers speaking English.
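
A toy sketch of the rule-application step: substituting confusable phonemes to generate non-native variants and adding them to the lexicon. The rule map and phone set are invented for illustration; the paper derives its rules automatically from grapheme-phoneme correspondences.

```python
def apply_rules(pronunciation, rules):
    """Derive a non-native variant by substituting confusable phonemes."""
    return [rules.get(ph, ph) for ph in pronunciation]

# Hypothetical substitutions for French speakers of English.
rules = {"TH": "S", "IH": "IY"}
lexicon = {"think": [["TH", "IH", "NG", "K"]]}
lexicon["think"].append(apply_rules(lexicon["think"][0], rules))
print(lexicon["think"])  # [['TH', 'IH', 'NG', 'K'], ['S', 'IY', 'NG', 'K']]
```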

pdf bib
Adaptation de domaine non supervisée pour la reconnaissance de la langue par régularisation d’un réseau de neurones (Unsupervised domain adaptation for language identification by regularization of a neural network)
Raphaël Duroselle | Denis Jouvet | Irina Illina
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 1 : Journées d'Études sur la Parole

Automatic language identification systems suffer a severe performance degradation when the acoustic characteristics of the test signals differ strongly from those of the training data. In this article, we study unsupervised domain adaptation of a system trained on telephone conversations to radio transmissions. We present a regularisation method for a neural network that adds to the cost function a term measuring the divergence between the two domains. Experiments on the OpenSAD15 corpus lead us to select the Maximum Mean Discrepancy as this measure. The approach is then applied to a modern language identification system based on x-vectors. On the RATS corpus, for seven of the eight radio channels studied, the approach outperforms, without using any annotated data from the target domain, a system trained in a supervised manner on annotated data from that domain.
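
A minimal sketch of the Maximum Mean Discrepancy term that would be added to the training loss; the RBF kernel and its bandwidth are common choices assumed here, not necessarily those used in the paper.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased MMD^2 estimate between two batches of hidden representations."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return (kernel(x, x).mean() + kernel(y, y).mean()
            - 2 * kernel(x, y).mean())

# During training: loss = task_loss + lam * mmd_rbf(source_hidden, target_hidden)
```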

pdf bib
Introduction d’informations sémantiques dans un système de reconnaissance de la parole (Introducing semantic information into a speech recognition system)
Stéphane Level | Irina Illina | Dominique Fohr
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 1 : Journées d'Études sur la Parole

Despite spectacular advances in recent years, Automatic Speech Recognition (ASR) systems still make errors, especially in noisy environments. To improve ASR, we propose to move towards contextualising the ASR system, since semantic information matters for ASR performance. Current ASR systems mainly take only lexical and syntactic information into account. To model semantic information, we propose to detect the words of the processed sentence that may have been misrecognised and to suggest words that better fit the context. This semantic analysis makes it possible to re-score the N best transcription hypotheses (N-best). We use Word2Vec and BERT embeddings. We evaluated our methodology on the TED talks corpus (TED-LIUM). The results show a significant improvement in word error rate with the proposed methodology.
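
One simple way to picture the semantic re-scoring: score each hypothesis by how well every word's embedding agrees with the mean of the other words' embeddings, then interpolate with the ASR score. The scoring function and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def semantic_rescore(nbest, embed, alpha=0.3):
    """Pick the N-best hypothesis that is most semantically coherent.

    nbest: list of (text, asr_score) pairs; embed(word) returns a vector
    (Word2Vec- or BERT-style).
    """
    def coherence(text):
        vecs = [embed(w) for w in text.split()]
        if len(vecs) < 2:
            return 0.0
        sims = []
        for i, v in enumerate(vecs):
            ctx = np.mean([u for j, u in enumerate(vecs) if j != i], axis=0)
            sims.append(v @ ctx / (np.linalg.norm(v) * np.linalg.norm(ctx)))
        return float(np.mean(sims))
    return max(nbest, key=lambda h: h[1] + alpha * coherence(h[0]))[0]
```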

2016

pdf bib
Learning Word Importance with the Neural Bag-of-Words Model
Imran Sheikh | Irina Illina | Dominique Fohr | Georges Linarès
Proceedings of the 1st Workshop on Representation Learning for NLP

pdf bib
How Diachronic Text Corpora Affect Context based Retrieval of OOV Proper Names for Audio News
Imran Sheikh | Irina Illina | Dominique Fohr
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of topic and semantic context of the OOV words captured from a diachronic text corpus. In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and over different time spans is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from a single source or from a longer time span.

2014

pdf bib
Ajout de nouveaux noms propres au vocabulaire d’un système de transcription en utilisant un corpus diachronique [Adding new proper names to the vocabulary of a speech transcription system using a diachronic corpus]
Irina Illina | Dominique Fohr | Georges Linarès
Traitement Automatique des Langues, Volume 55, Numéro 2 : Traitement automatique du langage parlé [Spoken language processing]

2012

pdf bib
Détection de transcriptions incorrectes de parole non-native dans le cadre de l’apprentissage de langues étrangères (Detection of incorrect transcriptions of non-native speech in the context of foreign language learning) [in French]
Luiza Orosanu | Denis Jouvet | Dominique Fohr | Irina Illina | Anne Bonneau
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 1: JEP

pdf bib
Génération des prononciations de noms propres à l’aide des Champs Aléatoires Conditionnels (Pronunciation generation for proper names using Conditional Random Fields) [in French]
Irina Illina | Dominique Fohr | Denis Jouvet
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 1: JEP

pdf bib
Gestion d’erreurs pour la fiabilisation des retours automatiques en apprentissage de la prosodie d’une langue seconde [Handling of errors for increasing automatic feedback reliability in foreign language prosody learning]
Anne Bonneau | Dominique Fohr | Irina Illina | Denis Jouvet | Odile Mella | Larbi Mesbahi | Luiza Orosanu
Traitement Automatique des Langues, Volume 53, Numéro 3 : Du bruit dans le signal : gestion des erreurs en traitement automatique des langues [Managing noise in the signal: Error handling in natural language processing]