Noé Durandard


2025

La Boussole Cassée de l’Alignement Politique
Noé Durandard
Proceedings of the Workshop Ethic and Alignment of (Large) Language Models 2025 (EALM)

The evaluation, regulation, and alignment of Large Language Models (LLMs) on political questions have become crucial concerns as these technologies spread ever more widely across all sectors of society. However, clear methodologies and theoretical foundations are still lacking. Drawing on Converse's work on public opinion, we critically examine common practices of ideological evaluation. We also argue for alternative, narrower approaches that are better aligned with the belief systems of the general public.

Lattice @MultiGEC-2025: A Spitful Multilingual Language Error Correction System Using LLaMA
Olga Seminck | Yoann Dupont | Mathieu Dehouck | Qi Wang | Noé Durandard | Margo Novikov
Proceedings of the 14th Workshop on Natural Language Processing for Computer Assisted Language Learning

LLMs stick to the point, humans to style: Semantic and Stylistic Alignment in Human and LLM Communication
Noé Durandard | Saurabh Dhawan | Thierry Poibeau
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

This study investigates differences in linguistic accommodation—changes in language use and style that individuals make to align with their dialogue partners—in human and LLM communication. Specifically, it contrasts semantic and stylistic alignment within question-answer pairs in terms of whether the answer was given by a human or an LLM. Utilizing embedding-based measures of linguistic similarity, we find that LLM-generated answers demonstrate higher semantic similarity—reflecting close conceptual alignment with the input questions—but relatively lower stylistic similarity. Human-written answers exhibit a reverse pattern, with lower semantic but higher stylistic similarity to the respective questions. These findings point to contrasting linguistic accommodation strategies evident in human and LLM communication, with implications for furthering personalization, social attunement, and engagement in human-AI dialogue.
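The abstract does not detail the specific embedding models used; purely as a rough illustration of an embedding-based semantic-similarity measure over a question-answer pair, the Python sketch below scores cosine similarity between sentence embeddings. The sentence-transformers encoder named here is an assumption for illustration, not taken from the paper, and the stylistic measure is not reproduced.

# Minimal sketch of an embedding-based semantic-similarity measure for a
# question-answer pair. The encoder choice is an illustrative assumption;
# the paper's actual semantic and stylistic measures may differ.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

def semantic_similarity(question: str, answer: str) -> float:
    """Cosine similarity between the sentence embeddings of question and answer."""
    q_emb, a_emb = model.encode([question, answer])
    return float(np.dot(q_emb, a_emb) / (np.linalg.norm(q_emb) * np.linalg.norm(a_emb)))

print(semantic_similarity(
    "How do glaciers form over time?",
    "Glaciers form when snow accumulates and compacts into ice over many years.",
))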

Language Style Matching in Large Language Models
Noé Durandard | Saurabh Dhawan | Thierry Poibeau
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Language Style Matching (LSM)—the subconscious alignment of linguistic style between conversational partners—is a key indicator of social coordination in human dialogue. We present the first systematic study of LSM in Large Language Models (LLMs) focusing on two primary objectives: measuring the degree of LSM exhibited in LLM-generated responses and developing techniques to enhance it. First, in order to measure whether LLMs natively show LSM, we computed LIWC-based LSM scores across diverse interaction scenarios and found that LSM scores for text generated by LLMs were either below or near the lower range of such scores observed in human dialogue. Second, we show that LLMs’ adaptive behavior in this regard can be improved using inference-time techniques. We introduce and evaluate an inference-time sampling strategy—Logit-Constrained Generation—which can substantially enhance LSM scores in text generated by an LLM while preserving fluency. By advancing our understanding of LSM in LLMs and proposing effective enhancement strategies, this research contributes to the development of more socially attuned and communicatively adaptive AI systems.
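For reference, LIWC-based LSM is conventionally computed per function-word category as 1 - |p1 - p2| / (p1 + p2 + 0.0001), where p1 and p2 are the two speakers' usage rates for that category, then averaged across categories. The Python sketch below applies that formula; the tiny word lists are toy stand-ins for the proprietary LIWC lexicon and the category inventory is truncated, both illustrative assumptions rather than the paper's setup.

# Minimal sketch of a Language Style Matching (LSM) score between two texts.
# The per-category formula 1 - |p1 - p2| / (p1 + p2 + 0.0001) follows the
# standard LSM definition; the word lists below are toy stand-ins for the
# proprietary LIWC categories and are illustrative assumptions only.
import re

CATEGORIES = {
    "articles": {"a", "an", "the"},
    "prepositions": {"of", "in", "to", "with", "on", "for", "at", "by"},
    "conjunctions": {"and", "but", "or", "because", "so"},
    "negations": {"not", "no", "never", "n't"},
    "pronouns": {"i", "you", "he", "she", "it", "we", "they"},
}

def category_rates(text: str) -> dict:
    """Proportion of tokens in each function-word category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    return {cat: sum(tok in words for tok in tokens) / total
            for cat, words in CATEGORIES.items()}

def lsm(text_a: str, text_b: str) -> float:
    """Average per-category LSM score between two texts."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001) for c in CATEGORIES]
    return sum(scores) / len(scores)

print(lsm("I think we should not go, because it is late.",
          "But they said we could go to the park with them."))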

2023

Automatic Annotation of Direct Speech in Written French Narratives
Noé Durandard | Viet Anh Tran | Gaspard Michel | Elena Epure
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The automatic annotation of direct speech (AADS) in written text has often been used in computational narrative understanding. Methods based on either rules or deep neural networks have been explored, in particular for English or German. Yet for French, our target language, few works exist. Our goal is to create a unified framework to design and evaluate AADS models in French. To this end, we consolidated the largest-to-date French narrative dataset annotated per word with direct speech; we adapted various baselines for sequence labelling or from AADS in other languages; and we designed and conducted an extensive evaluation focused on generalisation. Results show that the task still requires substantial effort and highlight the characteristics of each baseline. Although this framework could be improved, it is a further step towards encouraging more research on the topic.
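As a rough illustration of per-word direct-speech annotation cast as sequence labelling, the Python sketch below tags each token of a French sentence as inside or outside direct speech using a simple guillemet heuristic; it is an illustrative stand-in, not one of the baselines adapted or evaluated in the paper.

# Minimal sketch of per-word direct speech (DS) tagging as sequence labelling.
# The guillemet heuristic is an illustrative stand-in, not one of the
# baselines adapted in the paper.
OPENING, CLOSING = {"«", "“"}, {"»", "”"}

def tag_direct_speech(tokens: list[str]) -> list[str]:
    """Return one label per token: 'DS' inside quotation marks, 'O' outside."""
    labels, inside = [], False
    for tok in tokens:
        if tok in OPENING:
            inside = True
            labels.append("DS")
        elif tok in CLOSING:
            labels.append("DS")
            inside = False
        else:
            labels.append("DS" if inside else "O")
    return labels

tokens = ["Elle", "dit", ":", "«", "Je", "reviens", "demain", "»", "."]
print(list(zip(tokens, tag_direct_speech(tokens))))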