Saurabh Dhawan


2025

LLMs stick to the point, humans to style: Semantic and Stylistic Alignment in Human and LLM Communication
Noé Durandard | Saurabh Dhawan | Thierry Poibeau
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

This study investigates differences in linguistic accommodation—changes in language use and style that individuals make to align with their dialogue partners—in human and LLM communication. Specifically, it contrasts semantic and stylistic alignment within question-answer pairs, depending on whether the answer was given by a human or an LLM. Using embedding-based measures of linguistic similarity, we find that LLM-generated answers show higher semantic similarity—reflecting close conceptual alignment with the input questions—but relatively lower stylistic similarity. Human-written answers exhibit the reverse pattern, with lower semantic but higher stylistic similarity to the respective questions. These findings point to contrasting linguistic accommodation strategies in human and LLM communication, with implications for personalization, social attunement, and engagement in human-AI dialogue.
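
As a rough illustration of the embedding-based semantic-similarity measure mentioned in the abstract, the sketch below scores a question-answer pair with cosine similarity over sentence embeddings. It is not the paper's actual pipeline: the `sentence-transformers` library and the `all-MiniLM-L6-v2` model are assumptions, and the stylistic measure would require a style-focused encoder not shown here.

```python
# Minimal sketch: embedding-based semantic similarity between a question and an
# answer (illustrative only; not the models or preprocessing used in the paper).
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed library choice

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed generic sentence encoder

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

question = "Why does the moon look larger near the horizon?"
answer = "It is a perceptual illusion; the moon's angular size barely changes."

q_emb, a_emb = model.encode([question, answer])
print(f"Semantic similarity (cosine): {cosine(q_emb, a_emb):.3f}")
# A stylistic-similarity score would compare style-oriented representations
# (e.g. of function-word usage) rather than content embeddings.
```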

Language Style Matching in Large Language Models
Noé Durandard | Saurabh Dhawan | Thierry Poibeau
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Language Style Matching (LSM)—the subconscious alignment of linguistic style between conversational partners—is a key indicator of social coordination in human dialogue. We present the first systematic study of LSM in Large Language Models (LLMs), focusing on two primary objectives: measuring the degree of LSM exhibited in LLM-generated responses and developing techniques to enhance it. First, to measure whether LLMs natively exhibit LSM, we computed LIWC-based LSM scores across diverse interaction scenarios and found that scores for LLM-generated text were either below or near the lower range of those observed in human dialogue. Second, we show that LLMs’ adaptive behavior in this regard can be improved using inference-time techniques. We introduce and evaluate an inference-time sampling strategy—Logit-Constrained Generation—which can substantially enhance LSM scores in LLM-generated text while preserving fluency. By advancing our understanding of LSM in LLMs and proposing effective enhancement strategies, this research contributes to the development of more socially attuned and communicatively adaptive AI systems.
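
For readers unfamiliar with the metric, the LIWC-based LSM score referenced above follows the standard formulation used in the LSM literature: for each function-word category, 1 − |p₁ − p₂| / (p₁ + p₂ + ε), averaged across categories. The sketch below is a minimal illustration, assuming each utterance has already been reduced to per-category function-word proportions; the toy categories and proportions are hypothetical and do not reproduce the paper's exact LIWC setup.

```python
# Minimal sketch of a LIWC-style LSM score (illustrative; the paper's exact
# category set and preprocessing are not reproduced here).
# Inputs: per-category function-word proportions for two utterances,
# e.g. the fraction of tokens that are personal pronouns, articles, etc.

def lsm_score(props_a: dict[str, float], props_b: dict[str, float],
              eps: float = 0.0001) -> float:
    """Average 1 - |p_a - p_b| / (p_a + p_b + eps) over shared categories."""
    categories = props_a.keys() & props_b.keys()
    per_category = [
        1 - abs(props_a[c] - props_b[c]) / (props_a[c] + props_b[c] + eps)
        for c in categories
    ]
    return sum(per_category) / len(per_category)

# Hypothetical proportions for a prompt and a generated response.
prompt_props = {"pronouns": 0.12, "articles": 0.08, "prepositions": 0.14, "negations": 0.01}
response_props = {"pronouns": 0.06, "articles": 0.10, "prepositions": 0.13, "negations": 0.02}

print(f"LSM score: {lsm_score(prompt_props, response_props):.3f}")
```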