2024
Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks
Anas Himmi | Ekhine Irurozki | Nathan Noiry | Stephan Clémençon | Pierre Colombo
Findings of the Association for Computational Linguistics: EMNLP 2024
The evaluation of natural language processing (NLP) systems is crucial for advancing the field, but current benchmarking approaches often assume that all systems have scores available for all tasks, which is not always practical. In reality, several factors such as the cost of running baselines, private systems, computational limitations, or incomplete data may prevent some systems from being evaluated on entire tasks. This paper formalizes an existing problem in NLP research: benchmarking when some systems' scores are missing for a task, and proposes a novel approach to address it. Our method uses compatible partial rankings to impute missing data, which is then aggregated using the Borda count method. It includes two refinements designed specifically for scenarios where either task-level or instance-level scores are available. We also introduce an extended benchmark containing over 131 million scores, an order of magnitude larger than existing benchmarks. We validate our methods and demonstrate their effectiveness in addressing the challenge of systems that lack evaluation on an entire task. This work highlights the need for more comprehensive benchmarking approaches that can handle real-world scenarios where not all systems are evaluated on every task.
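The paper's compatible-partial-ranking imputation is not reproduced here; as a minimal sketch of the Borda-count aggregation step it feeds into, assume a (systems x tasks) score matrix in which np.nan marks a system that was never run on a task (the function name and toy data are hypothetical):

import numpy as np

def borda_aggregate(scores: np.ndarray) -> np.ndarray:
    """Aggregate a (n_systems, n_tasks) score matrix into one ranking
    via Borda count. np.nan marks a missing (system, task) score; here
    missing entries simply contribute no points, a naive stand-in for
    the paper's partial-ranking imputation."""
    n_systems, n_tasks = scores.shape
    points = np.zeros(n_systems)
    for t in range(n_tasks):
        col = scores[:, t]
        evaluated = ~np.isnan(col)
        # Rank evaluated systems on this task: 0 = worst, k-1 = best.
        order = np.argsort(col[evaluated])
        ranks = np.empty(order.size)
        ranks[order] = np.arange(order.size)
        points[evaluated] += ranks
    # Higher total Borda score = better aggregate rank.
    return np.argsort(-points)

# Toy usage: 3 systems, 2 tasks; system 2 was never run on task 0.
S = np.array([[0.7, 0.6],
              [0.9, 0.5],
              [np.nan, 0.8]])
print(borda_aggregate(S))  # system indices, best first

Per task, each evaluated system earns points equal to the number of systems it beats; summing these points across tasks yields the aggregate ranking.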
2023
Toward Stronger Textual Attack Detectors
Pierre Colombo | Marine Picot | Nathan Noiry | Guillaume Staerman | Pablo Piantanida
Findings of the Association for Computational Linguistics: EMNLP 2023
The landscape of available textual adversarial attacks keeps growing, posing severe threats and raising concerns regarding the integrity of deep NLP systems. However, the crucial problem of defending against malicious attacks has received little attention in the NLP community, even though it is instrumental to developing robust and trustworthy systems. This paper makes two important contributions in this line of research: (i) we introduce LAROUSSE, a new framework to detect textual adversarial attacks, and (ii) we introduce STAKEOUT, an extended benchmark composed of nine popular attack methods, three datasets, and two pre-trained models. LAROUSSE is ready to use in production as it is unsupervised, hyperparameter-free, and non-differentiable, protecting it against gradient-based methods. Our new benchmark STAKEOUT enables a robust evaluation framework: we conduct extensive numerical experiments which demonstrate that LAROUSSE outperforms previous methods and which allow us to identify interesting factors behind detection-rate variations.
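The abstract does not spell out LAROUSSE's detection statistic, so the sketch below only illustrates how an unsupervised, score-based detector is commonly evaluated on a benchmark of this kind: fix a false-positive rate on clean inputs and report the fraction of attacked inputs flagged (all names and scores are hypothetical):

import numpy as np

def detection_rate_at_fpr(scores_clean, scores_attack, target_fpr=0.05):
    """Pick the threshold yielding `target_fpr` on clean inputs, then
    report the share of attacked inputs flagged (higher anomaly
    score = more suspicious)."""
    threshold = np.quantile(scores_clean, 1.0 - target_fpr)
    return float(np.mean(scores_attack > threshold))

# Hypothetical anomaly scores from any unsupervised detector.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 1000)
attacked = rng.normal(1.5, 1.0, 1000)
print(detection_rate_at_fpr(clean, attacked))  # detection rate at 5% FPR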
The Glass Ceiling of Automatic Evaluation in Natural Language Generation
Pierre Colombo | Maxime Peyrard | Nathan Noiry | Robert West | Pablo Piantanida
Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings)
A Novel Information Theoretic Objective to Disentangle Representations for Fair Classification
Pierre Colombo | Nathan Noiry | Guillaume Staerman | Pablo Piantanida
Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings)
2022
Learning Disentangled Textual Representations via Statistical Measures of Similarity
Pierre Colombo | Guillaume Staerman | Nathan Noiry | Pablo Piantanida
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
When working with textual data, a natural application of disentangled representations is fair classification, where the goal is to make predictions without being biased (or influenced) by sensitive attributes that may be present in the data (e.g., age, gender, or race). Dominant approaches to disentangling a sensitive attribute from textual representations rely on simultaneously learning a penalization term that involves either an adversarial loss (e.g., a discriminator) or an information measure (e.g., mutual information). However, these methods require training a deep neural network with several parameter updates for each update of the representation model. The resulting nested optimization loop is time-consuming, adds complexity to the optimization dynamics, and requires careful hyperparameter selection (e.g., learning rates, architecture). In this work, we introduce a family of regularizers for learning disentangled representations that do not require additional training. These regularizers are based on statistical measures of similarity between the conditional probability distributions with respect to the sensitive attribute. They are faster, involve no additional tuning, and achieve better results when combined with both pretrained and randomly initialized text encoders.
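As a minimal sketch of the training-free idea, assuming a binary sensitive attribute and using a Gaussian-kernel MMD as a stand-in for the paper's similarity measures (the function and variable names are hypothetical):

import torch

def mmd_penalty(z: torch.Tensor, a: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Training-free disentanglement regularizer (illustration):
    squared MMD with a Gaussian kernel between embeddings conditioned
    on a binary sensitive attribute `a`. No auxiliary network is
    trained; the penalty is computed directly from the batch, which
    is assumed to contain both attribute values."""
    z0, z1 = z[a == 0], z[a == 1]
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    return k(z0, z0).mean() + k(z1, z1).mean() - 2 * k(z0, z1).mean()

# Usage inside a classification step (encoder and weight are hypothetical):
# loss = task_loss + lam * mmd_penalty(encoder(x), sensitive_attr)

Because the penalty depends only on the current batch of embeddings, it avoids the nested optimization loop of adversarial approaches: each representation update needs one extra forward computation rather than training a discriminator.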