Pierre-Yves Oudeyer


2025

Reinforcement Learning for Aligning Large Language Models Agents with Interactive Environments: Quantifying and Mitigating Prompt Overfitting
Mohamed Salim Aissi | Clément Romac | Thomas Carta | Sylvain Lamprier | Pierre-Yves Oudeyer | Olivier Sigaud | Laure Soulier | Nicolas Thome
Findings of the Association for Computational Linguistics: NAACL 2025

Reinforcement learning (RL) is a promising approach for aligning the knowledge of large language models (LLMs) with sequential decision-making tasks. However, few studies have thoroughly investigated how fine-tuning LLM agents with RL in a specific environment affects their capabilities. In this paper, we propose a novel framework to analyze the sensitivity of LLMs to prompt formulations following RL training in a textual environment. Our findings reveal that the performance of LLMs degrades when they face prompt formulations different from those used during the RL training phase. We further analyze the source of this sensitivity by examining the model’s internal representations and salient tokens. Finally, we propose using a contrastive loss to mitigate this sensitivity and to improve the robustness and generalization capabilities of LLMs.
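
A minimal illustrative sketch of how a contrastive term over paired prompt formulations could accompany the RL objective; the abstract does not give the exact loss, so the InfoNCE-style formulation, the function names, and the weighting are assumptions for illustration only, not the paper's code.

# Illustrative sketch only (assumed formulation, not the authors' implementation).
# Idea: encourage the LLM agent to produce similar hidden representations for the
# same environment state described under two different prompt formulations.
import torch
import torch.nn.functional as F

def prompt_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (batch, dim) representations of the same states under two prompt formats."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature               # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Matching pairs (same state, different prompt wording) act as positives.
    return F.cross_entropy(logits, targets)

# Hypothetical usage: total_loss = rl_loss + lambda_contrastive * prompt_contrastive_loss(z_a, z_b)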

2023

Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Xingdi Yuan | Tong Wang | Yen-Hsiang Wang | Emery Fine | Rania Abdelghani | Hélène Sauzéon | Pierre-Yves Oudeyer
Findings of the Association for Computational Linguistics: ACL 2023

Large Language Models (LLMs) have in recent years demonstrated impressive prowess in natural language generation. A common practice to improve generation diversity is to sample multiple outputs from the model. However, partly due to the inaccessibility of LLMs, there is no simple and robust way to select the best output from these stochastic samples. As a case study framed in the context of question generation, we propose two prompt-based approaches, namely round-trip and prompt-based score, for selecting high-quality questions from a set of LLM-generated candidates. Our method neither requires modifying the underlying model nor relies on human-annotated references, both of which are realistic constraints for real-world deployment of LLMs. Through automatic as well as human evaluations, we empirically demonstrate that our approach selects higher-quality questions than greedy generation.
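
A minimal illustrative sketch of the two selection strategies named in the abstract; the exact prompts, the generate callable, and the scoring details are placeholders assumed for illustration, not the paper's implementation.

# Illustrative sketch only (assumed prompts and scoring, not the authors' code).
# `generate` stands for any text-completion call; it is a hypothetical placeholder.
from typing import Callable, List

def round_trip_score(generate: Callable[[str], str], context: str,
                     question: str, target_answer: str) -> float:
    """Answer the candidate question from the context and check whether the
    round trip recovers the intended answer (1.0 if so, else 0.0)."""
    answer = generate(f"Context: {context}\nQuestion: {question}\nAnswer:")
    return 1.0 if target_answer.lower() in answer.lower() else 0.0

def prompt_based_score(generate: Callable[[str], str], context: str, question: str) -> float:
    """Prompt the model to rate the candidate question and parse a numeric score."""
    reply = generate("Rate from 1 to 5 how well this question is answerable "
                     f"from the context.\nContext: {context}\nQuestion: {question}\nScore:")
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

def select_best(candidates: List[str], score_fn: Callable[[str], float]) -> str:
    """Keep the highest-scoring candidate question."""
    return max(candidates, key=score_fn)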

2022

Automatic Exploration of Textual Environments with Language-Conditioned Autotelic Agents
Laetitia Teodorescu | Xingdi Yuan | Marc-Alexandre Côté | Pierre-Yves Oudeyer
Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)

The purpose of this extended abstract is to discuss the possible fruitful interactions between intrinsically motivated, language-conditioned agents and textual environments. We define autotelic agents as agents able to set their own goals. We identify desirable properties of textual environments that make them a good testbed for autotelic agents. We then list drivers of exploration for such agents that would allow them to acquire large repertoires of skills in these environments, enabling such agents to be repurposed for solving the benchmarks implemented in textual environments. Finally, we discuss challenges and further perspectives brought about by this interaction.