Anas Himmi


2024

Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation
Anas Himmi | Guillaume Staerman | Marine Picot | Pierre Colombo | Nuno Guerreiro
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Hallucinated translations pose significant threats and safety concerns for the practical deployment of machine translation systems. Previous research has shown that detectors exhibit complementary performance: different detectors excel at detecting different types of hallucinations. In this paper, we address the limitations of individual detectors by combining them, introducing a straightforward method for aggregating multiple detectors. Our results demonstrate the efficacy of our aggregated detector, providing a promising step towards ever more reliable machine translation systems.
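
To illustrate the idea of combining detectors with complementary strengths, here is a minimal Python sketch that rank-normalizes each detector's scores and averages them. The function names, the rank-based normalization, and the toy data are illustrative assumptions, not the paper's actual aggregation method.

```python
import numpy as np

def rank_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw detector scores to [0, 1] via their ranks so that
    detectors with different score scales become comparable."""
    ranks = scores.argsort().argsort()           # rank of each sample
    return ranks / (len(scores) - 1)             # 0 = least, 1 = most suspicious

def aggregate_detectors(score_matrix: np.ndarray) -> np.ndarray:
    """Combine several hallucination detectors by averaging their
    rank-normalized scores. `score_matrix` has shape
    (n_detectors, n_samples); higher means more likely hallucinated."""
    normalized = np.vstack([rank_normalize(s) for s in score_matrix])
    return normalized.mean(axis=0)

# Toy usage: three detectors scoring five translations on different scales.
detector_scores = np.array([
    [0.1, 0.9, 0.4, 0.2, 0.8],   # e.g. a model-confidence detector
    [12., 85., 30., 10., 70.],   # e.g. an attention-based detector
    [0.3, 0.7, 0.9, 0.1, 0.6],   # e.g. a similarity-based detector
])
print(aggregate_detectors(detector_scores))  # combined hallucination scores
```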

Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks
Anas Himmi | Ekhine Irurozki | Nathan Noiry | Stephan Clémençon | Pierre Colombo
Findings of the Association for Computational Linguistics: EMNLP 2024

The evaluation of natural language processing (NLP) systems is crucial for advancing the field, but current benchmarking approaches often assume that all systems have scores available for all tasks, which is not always practical. In reality, several factors such as the cost of running baselines, private systems, computational limitations, or incomplete data may prevent some systems from being evaluated on entire tasks. This paper formalizes an existing problem in NLP research, benchmarking when some systems' scores are missing for certain tasks, and proposes a novel approach to address it. Our method uses a compatible partial ranking approach to impute missing data, which is then aggregated using the Borda count method. It includes two refinements designed specifically for scenarios where either task-level or instance-level scores are available. We also introduce an extended benchmark that contains over 131 million scores, an order of magnitude larger than existing benchmarks. We validate our methods and demonstrate their effectiveness in addressing the challenge of systems missing evaluations on entire tasks. This work highlights the need for more comprehensive benchmarking approaches that can handle real-world scenarios where not all systems are evaluated on every task.
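
To make the Borda-count aggregation step concrete, here is a minimal Python sketch that turns per-task scores into rankings and sums Borda points across tasks. The function name, the toy data, and the handling of missing systems (skipping them per task and averaging points over the tasks a system was evaluated on) are illustrative simplifications, not the paper's compatible partial ranking imputation.

```python
def borda_aggregate(score_table: dict[str, dict[str, float]]) -> dict[str, float]:
    """Aggregate per-task scores into a single ranking with Borda count.
    `score_table` maps task -> {system: score}; systems may be missing
    from some tasks. Missing systems simply receive no points for that
    task, which is a simplification rather than the paper's imputation."""
    points: dict[str, float] = {}
    counts: dict[str, int] = {}
    for task, sys_scores in score_table.items():
        # Rank only the systems that have a score on this task (best first).
        ranked = sorted(sys_scores, key=sys_scores.get, reverse=True)
        n = len(ranked)
        for position, system in enumerate(ranked):
            points[system] = points.get(system, 0.0) + (n - 1 - position)
            counts[system] = counts.get(system, 0) + 1
    # Average points per evaluated task so sparsely evaluated systems
    # are not penalized purely for having fewer entries.
    return {system: points[system] / counts[system] for system in points}

# Toy usage: three tasks, with "sys_c" missing from one of them.
scores = {
    "task1": {"sys_a": 0.81, "sys_b": 0.75, "sys_c": 0.90},
    "task2": {"sys_a": 0.60, "sys_b": 0.72, "sys_c": 0.65},
    "task3": {"sys_a": 0.55, "sys_b": 0.58},              # sys_c missing
}
print(borda_aggregate(scores))  # higher = better aggregate rank
```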