Hayden Helm


2024

Tracking the perspectives of interacting language models
Hayden Helm | Brandon Duderstadt | Youngser Park | Carey Priebe
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large language models (LLMs) are capable of producing high-quality information at unprecedented rates. As these models continue to entrench themselves in society, the content they produce will become increasingly pervasive in databases that are, in turn, incorporated into the pre-training data, fine-tuning data, retrieval data, etc. of other language models. In this paper we formalize the idea of a communication network of LLMs and introduce a method for representing the perspective of individual models within a collection of LLMs. Given these tools, we systematically study information diffusion in the communication network of LLMs in various simulated settings.
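
The abstract describes the setup only at a high level. As a rough illustration of the kind of simulation it refers to, the sketch below models a communication network of LLMs as a directed graph whose nodes carry perspective vectors that drift toward the perspectives of the models whose output they consume. The embedding-vector representation of a perspective, the `adjacency` structure, and the averaging update with rate `alpha` are illustrative assumptions for this sketch, not the paper's actual method.

```python
# Minimal sketch (not the paper's implementation): a communication network of
# language models as a directed graph, where each node's "perspective" is
# approximated by an embedding vector. The averaging update below stands in
# for the generate -> incorporate -> re-embed loop a real study would run.
import numpy as np

rng = np.random.default_rng(0)

n_models, dim = 5, 8                               # number of LLM nodes, embedding dimension
perspectives = rng.normal(size=(n_models, dim))    # initial perspective embeddings (assumed)

# Directed communication network: adjacency[i, j] = 1 if model i reads content from model j.
adjacency = (rng.random((n_models, n_models)) < 0.4).astype(float)
np.fill_diagonal(adjacency, 0.0)

alpha = 0.3                                        # how strongly incoming content shifts a model
for step in range(20):
    updated = perspectives.copy()
    for i in range(n_models):
        neighbors = np.flatnonzero(adjacency[i])
        if neighbors.size:
            # Drift toward the mean perspective of the models whose output model i consumes.
            updated[i] = (1 - alpha) * perspectives[i] + alpha * perspectives[neighbors].mean(axis=0)
    perspectives = updated

# Pairwise distances between perspectives shrink as information diffuses through the network.
dists = np.linalg.norm(perspectives[:, None] - perspectives[None, :], axis=-1)
print(np.round(dists, 3))
```

Printing the pairwise distance matrix after the simulated rounds shows perspectives converging along the network's connected structure, the qualitative behavior the abstract's "information diffusion" framing points to.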