Personas as a Way to Model Truthfulness in Language Models

Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, He He


Abstract
Large language models (LLMs) are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. While unintuitive from a classic view of LMs, recent work has shown that the truth value of a statement can be elicited from the model’s representations. This paper presents an explanation for why LMs appear to know the truth despite not being trained with truth labels. We hypothesize that the pretraining data is generated by groups of (un)truthful agents whose outputs share common features, forming an (un)truthful persona. By training on this data, LMs can infer and represent the persona in their activation space, which allows them to separate truth from falsehood and controls the truthfulness of their generations. We show evidence for the persona hypothesis via two observations: (1) we can probe whether a model’s answer will be truthful before it is generated; (2) finetuning a model on a set of facts improves its truthfulness on unseen topics. Next, using arithmetic as a synthetic environment, we show that the structure of the pretraining data is crucial for the model to infer the truthful persona. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness.
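The probing observation above can be illustrated with a minimal sketch: a linear probe trained on hidden activations, labeled by whether the eventual answer was truthful. The paper probes actual LM activations; here, as a stand-in, activations are simulated with a planted "persona direction" along which truthful and untruthful examples separate. All names and parameters below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a "truthfulness probe": a logistic-regression
# classifier on simulated activations. The separating "persona direction"
# is planted by construction; in the paper it would be learned by the LM.
import numpy as np

rng = np.random.default_rng(0)
d = 64   # assumed activation dimensionality
n = 500  # number of labeled examples

# Unit vector along which truthful vs. untruthful "personas" separate.
persona_dir = rng.normal(size=d)
persona_dir /= np.linalg.norm(persona_dir)

labels = rng.integers(0, 2, size=n)  # 1 = answer was truthful
# Gaussian noise plus a +/-2 shift along the persona direction.
acts = rng.normal(size=(n, d)) + 2.0 * np.outer(2 * labels - 1, persona_dir)

# Fit the probe by plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(200):
    p = 1 / (1 + np.exp(-acts @ w))      # predicted P(truthful)
    w -= 0.1 * acts.T @ (p - labels) / n  # gradient step

preds = (acts @ w > 0).astype(int)
accuracy = (preds == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

Because the two classes are linearly separable along a single direction, even this simple probe recovers the labels with high accuracy, mirroring the finding that truthfulness is linearly decodable from activations before generation.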
Anthology ID:
2024.emnlp-main.364
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6346–6359
URL:
https://aclanthology.org/2024.emnlp-main.364
DOI:
10.18653/v1/2024.emnlp-main.364
Bibkey:
Cite (ACL):
Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, and He He. 2024. Personas as a Way to Model Truthfulness in Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6346–6359, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Personas as a Way to Model Truthfulness in Language Models (Joshi et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.364.pdf