Matthieu Dubois


2025

MOSAIC: Multiple Observers Spotting AI Content
Matthieu Dubois | François Yvon | Pablo Piantanida
Findings of the Association for Computational Linguistics: ACL 2025

The dissemination of Large Language Models (LLMs), trained at scale and endowed with powerful text-generating abilities, has made it easier for anyone to produce harmful, toxic, faked or forged content. In response, various proposals have been made to automatically discriminate artificially generated from human-written texts, typically framing the task as binary classification. Early approaches evaluate an input document with a well-chosen detector LLM, assuming that low-perplexity scores reliably signal machine-made content. More recent systems instead consider two LLMs and compare their probability distributions over the document to discriminate further when perplexity alone cannot. However, using a fixed pair of models can make performance brittle. We extend these approaches to the ensembling of several LLMs and derive a new, theoretically grounded method to combine their respective strengths. Our experiments, using a variety of generator LLMs, suggest that this approach effectively harnesses each model’s capabilities, leading to strong detection performance across a variety of domains.
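
To make the scoring pipeline concrete, here is a minimal sketch, assuming Hugging Face transformers, of perplexity-based scoring with a small set of observer models. The observer names, the plain averaging, and the decision direction are illustrative assumptions, not the paper's theoretically grounded combination.

```python
# Minimal sketch of multi-observer scoring, assuming Hugging Face `transformers`.
# The observer list and the plain average below are illustrative assumptions,
# not the paper's theoretically grounded weighting.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

OBSERVERS = ["gpt2", "gpt2-medium", "gpt2-large"]  # hypothetical observer set

def log_loss(model, tokenizer, text):
    # Average per-token negative log-likelihood (log-perplexity) of `text`.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def ensemble_score(text):
    # Classic single-detector heuristic: low perplexity suggests machine text.
    # Here each observer contributes its log-loss and the scores are averaged.
    losses = []
    for name in OBSERVERS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)
        losses.append(log_loss(model, tokenizer, text))
    return sum(losses) / len(losses)

print(ensemble_score("A short example document to be scored."))
```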

How Sampling Affects the Detectability of Machine-written Texts: A Comprehensive Study
Matthieu Dubois | François Yvon | Pablo Piantanida
Findings of the Association for Computational Linguistics: EMNLP 2025

As texts generated by Large Language Models (LLMs) become ever more common and are often indistinguishable from human-written content, research on automatic text detection has attracted growing attention. Many recent detectors report near-perfect accuracy, often boasting AUROC scores above 99%. However, these claims typically assume fixed generation settings, leaving open the question of how robust such systems are to changes in decoding strategies. In this work, we systematically examine how sampling-based decoding impacts detectability, with a focus on how subtle variations in a model’s (sub)word-level distribution affect detection performance. We find that even minor adjustments to decoding parameters - such as temperature or top-p (nucleus) sampling - can severely impair detector accuracy, with AUROC dropping from near-perfect levels to 1% in some settings. Our findings expose critical blind spots in current detection methods and emphasize the need for more comprehensive evaluation protocols. To facilitate future research, we release a large-scale dataset encompassing 37 decoding configurations, along with our code and evaluation framework, at https://github.com/BaggerOfWords/Sampling-and-Detection.
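
As a rough illustration of the kind of evaluation described above (not the released dataset or framework), the sketch below generates text under two hypothetical sampling configurations and measures how well a simple log-likelihood score separates machine from human text with AUROC. The model name, prompt, configurations, and "human" texts are placeholders.

```python
# Rough illustration only: generate under different sampling settings and check
# how a simple log-likelihood detector score separates machine from human text.
# The model, configurations, prompt and "human" texts are placeholder assumptions.
import torch
from sklearn.metrics import roc_auc_score
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def nll(text):
    # Average per-token negative log-likelihood under the detector model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**enc, labels=enc["input_ids"]).loss.item()

human_texts = [
    "The committee postponed its decision until further data became available.",
    "She walked to the market and bought bread, cheese and a bag of apples.",
]
configs = [dict(temperature=0.7, top_p=1.0), dict(temperature=1.0, top_p=0.9)]

for cfg in configs:
    prompt = tokenizer("The study shows that", return_tensors="pt")
    machine_texts = []
    for _ in range(2):
        out = model.generate(**prompt, do_sample=True, max_new_tokens=40,
                             pad_token_id=tokenizer.eos_token_id, **cfg)
        machine_texts.append(tokenizer.decode(out[0], skip_special_tokens=True))
    # Convention here: human text tends to have higher NLL, so humans get label 1.
    scores = [nll(t) for t in machine_texts + human_texts]
    labels = [0] * len(machine_texts) + [1] * len(human_texts)
    print(cfg, "AUROC:", roc_auc_score(labels, scores))
```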

MOSAIC at GENAI Detection Task 3: Zero-Shot Detection Using an Ensemble of Models
Matthieu Dubois | François Yvon | Pablo Piantanida
Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)

MOSAIC introduces a new ensemble approach that combines several detector models to spot AI-generated texts. The method improves detection reliability by integrating insights from multiple models, addressing the brittleness that comes from relying on a single detector. It uses a theoretically grounded algorithm to minimize the worst-case expected encoding size across models, thereby optimizing the detection process. In this submission, we report evaluation results on the RAID benchmark, a comprehensive English-centric testbed for machine-generated texts. These results were obtained in the context of the “Cross-domain Machine-Generated Text Detection” shared task. We show that our model can be competitive for a variety of domains and generator models, but that it can be challenged by adversarial attacks and by changes in the text generation strategy.
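
As a toy numerical illustration of the worst-case encoding idea mentioned above, the sketch below searches for mixture weights over a few hand-made token distributions so that the expected code length (cross-entropy) stays small no matter which distribution generated the data. The distributions and the generic Nelder-Mead search are illustrative assumptions, not the paper's algorithm.

```python
# Toy illustration of minimizing the worst-case expected encoding size.
# The three hand-made next-token distributions and the generic Nelder-Mead
# search are illustrative assumptions, not the algorithm from the paper.
import numpy as np
from scipy.optimize import minimize

# Hypothetical next-token distributions of three models over a 4-symbol vocabulary.
P = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])

def simplex(v):
    # Crude projection of an unconstrained vector onto mixture weights.
    v = np.abs(v) + 1e-12
    return v / v.sum()

def worst_case_bits(v):
    # Max over possible generators k of E_{x ~ P_k}[-log2(sum_j w_j P_j(x))].
    w = simplex(v)
    mix = w @ P
    return max(float(-(P[k] * np.log2(mix)).sum()) for k in range(len(P)))

res = minimize(worst_case_bits, x0=np.ones(len(P)) / len(P), method="Nelder-Mead")
w = simplex(res.x)
print("mixture weights:", np.round(w, 3))
print("worst-case bits per token:", round(worst_case_bits(res.x), 3))
```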

MOSAIC: A Mixture of Experts for Detecting Artificial Texts
Matthieu Dubois | Pablo Piantanida | François Yvon
Proceedings of the 32nd Conference on Natural Language Processing (TALN), Volume 1: Original Scientific Papers

The dissemination of large language models to the general public makes it easier to produce harmful, defamatory, dishonest or forged content. In response, several solutions have been proposed to identify the texts thus produced, treating the problem as a binary classification task. Early approaches rely on analysing a document with a detector model, under the assumption that a low perplexity score indicates artificial content. More recent methods instead compare the probability distributions computed by two models. However, relying on a fixed pair of models can make performance brittle. We extend these methods by combining several models and by developing a theoretically grounded approach to make the best use of each of them.