Asma Ghandeharioun
2025
Racing Thoughts: Explaining Contextualization Errors in Large Language Models
Michael A. Lepori | Michael Curtis Mozer | Asma Ghandeharioun
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
The profound success of transformer-based language models can largely be attributed to their ability to integrate relevant contextual information from an input sequence in order to generate a response or complete a task. However, we know very little about the algorithms that a model employs to implement this capability, nor do we understand their failure modes. For example, given the prompt “John is going fishing, so he walks over to the bank. Can he make an ATM transaction?”, a model may incorrectly respond “Yes” if it has not properly contextualized “bank” as a geographical feature, rather than a financial institution. We propose the LLM Race Conditions Hypothesis as an explanation of contextualization errors of this form. This hypothesis identifies dependencies between tokens (e.g., “bank” must be properly contextualized before the final token, “?”, integrates information from “bank”), and claims that contextualization errors are a result of violating these dependencies. Using a variety of techniques from mechanistic interpretability, we provide correlational and causal evidence in support of the hypothesis and suggest inference-time interventions to address it.
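As a rough illustration of the kind of layer-wise probing the abstract alludes to, the sketch below checks, for a HuggingFace causal LM, how the hidden state of “bank” relates to a geographic versus a financial sense at each layer. The model name ("gpt2"), the sense-probe words (" river", " money"), and the cosine-similarity-to-input-embeddings heuristic are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: probing when a polysemous token ("bank") becomes
# contextualized across layers. Model, probe words, and metric are
# illustrative assumptions, not the paper's exact technique.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any HuggingFace causal LM works here
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = "John is going fishing, so he walks over to the bank."
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Locate the position of the token "bank" in the input sequence.
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
bank_pos = max(i for i, t in enumerate(tokens) if "bank" in t.lower())

# Reference vectors for the two senses, taken from the input embedding table.
emb = model.get_input_embeddings().weight
river_id = tok(" river", add_special_tokens=False)["input_ids"][0]
money_id = tok(" money", add_special_tokens=False)["input_ids"][0]

cos = torch.nn.functional.cosine_similarity
for layer, hidden in enumerate(out.hidden_states):
    h = hidden[0, bank_pos]
    r = cos(h, emb[river_id], dim=0).item()
    m = cos(h, emb[money_id], dim=0).item()
    print(f"layer {layer:2d}  river={r:+.3f}  money={m:+.3f}")
```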
2020
Human-centric dialog training via offline reinforcement learning
Natasha Jaques | Judy Hanwen Shen | Asma Ghandeharioun | Craig Ferguson | Agata Lapedriza | Noah Jones | Shixiang Gu | Rosalind Picard
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
How can we train a dialog model to produce better conversations by learning from human feedback, without the risk of humans teaching it harmful chat behaviors? We start by hosting models online, and gather human feedback from real-time, open-ended conversations, which we then use to train and improve the models using offline reinforcement learning (RL). We identify implicit conversational cues including language similarity, elicitation of laughter, sentiment, and more, which indicate positive human feedback, and embed these in multiple reward functions. A well-known challenge is that learning an RL policy in an offline setting usually fails due to the lack of ability to explore and the tendency to make over-optimistic estimates of future reward. These problems become even harder when using RL for language models, which can easily have a 20,000-token action vocabulary and many possible reward functions. We solve the challenge by developing a novel class of offline RL algorithms. These algorithms use KL-control to penalize divergence from a pre-trained prior language model, and use a new strategy to make the algorithm pessimistic, instead of optimistic, in the face of uncertainty. We test the resulting dialog model with ratings from 80 users in an open-domain setting and find it achieves significant improvements over existing deep offline RL approaches. The novel offline RL method is viable for improving any existing generative dialog model using a static dataset of human feedback.
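A minimal sketch of the KL-control idea described in the abstract: per-token rewards are shaped by penalizing the policy's divergence from a frozen pre-trained prior language model, with the human-feedback reward added at the end of the response. The function name, coefficient value, and tensor shapes below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of KL-control reward shaping for offline dialog RL.
import torch
import torch.nn.functional as F

def kl_shaped_rewards(policy_logits, prior_logits, actions, human_reward, kl_coeff=0.1):
    """Combine sparse human feedback with a per-token KL penalty.

    policy_logits, prior_logits: (seq_len, vocab_size) logits for the response.
    actions: (seq_len,) token ids actually generated.
    human_reward: scalar reward derived from implicit conversational cues.
    """
    logp_policy = F.log_softmax(policy_logits, dim=-1)
    logp_prior = F.log_softmax(prior_logits, dim=-1)
    # Log-ratio of the chosen tokens under the policy vs. the frozen prior.
    idx = actions.unsqueeze(-1)
    log_ratio = (logp_policy.gather(-1, idx) - logp_prior.gather(-1, idx)).squeeze(-1)
    per_token = -kl_coeff * log_ratio              # KL penalty at every step
    per_token[-1] = per_token[-1] + human_reward   # terminal human-feedback reward
    return per_token

# Toy usage with random logits for a 5-token response over a 20,000-token vocabulary.
seq_len, vocab = 5, 20_000
rewards = kl_shaped_rewards(
    torch.randn(seq_len, vocab), torch.randn(seq_len, vocab),
    torch.randint(vocab, (seq_len,)), human_reward=1.0)
print(rewards)
```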