Making Large Language Models into World Models with Precondition and Effect Knowledge

Kaige Xie, Ian Yang, John Gunerli, Mark Riedl

Abstract
World models, which encapsulate the dynamics of how actions affect environments, are foundational to the functioning of intelligent agents. In this work, we explore the potential of Large Language Models (LLMs) to operate as world models. Although LLMs are not inherently designed to model real-world dynamics, we show that they can be induced to perform two critical world model functions: determining the applicability of an action based on a given world state, and predicting the resulting world state upon action execution. This is achieved by fine-tuning two separate LLMs—one for precondition prediction and another for effect prediction—while leveraging synthetic data generation techniques. Through human-participant studies, we validate that the precondition and effect knowledge generated by our models aligns with human understanding of world dynamics. We also analyze the extent to which the world model trained on our synthetic data results in an inferred state space that supports the creation of action chains, a necessary property for planning.
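The two-model interface described in the abstract can be pictured concretely. Below is a minimal sketch (not the authors' implementation) of how a fine-tuned precondition LLM and a fine-tuned effect LLM might be composed into a single world-model step function; the checkpoint paths and prompt templates are hypothetical placeholders, since the paper's actual models and formats are not given here.

```python
# Minimal sketch of the two-LLM world-model interface from the abstract.
# Model paths and prompt templates are hypothetical placeholders; the
# paper's actual fine-tuned checkpoints and formats may differ.
from transformers import pipeline

# Hypothetical fine-tuned checkpoints, one per world-model function.
precondition_llm = pipeline("text-generation", model="path/to/precondition-llm")
effect_llm = pipeline("text-generation", model="path/to/effect-llm")

def is_applicable(state: str, action: str) -> bool:
    """Precondition check: can `action` be executed in `state`?"""
    prompt = f"State: {state}\nAction: {action}\nApplicable (yes/no):"
    full = precondition_llm(prompt, max_new_tokens=3)[0]["generated_text"]
    # The pipeline returns prompt + continuation; keep only the continuation.
    return full[len(prompt):].strip().lower().startswith("yes")

def predict_effect(state: str, action: str) -> str:
    """Effect prediction: the world state after executing `action`."""
    prompt = f"State: {state}\nAction: {action}\nNext state:"
    full = effect_llm(prompt, max_new_tokens=128)[0]["generated_text"]
    return full[len(prompt):].strip()

def world_model_step(state: str, action: str) -> str | None:
    """One transition; returns None if the action's preconditions fail."""
    if not is_applicable(state, action):
        return None
    return predict_effect(state, action)
```

Chaining `world_model_step` over a sequence of actions, rejecting any action whose preconditions fail, corresponds to the action-chaining behavior the abstract analyzes as a prerequisite for planning.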
Anthology ID: 2025.coling-main.503
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 7532–7545
URL: https://aclanthology.org/2025.coling-main.503/
Cite (ACL): Kaige Xie, Ian Yang, John Gunerli, and Mark Riedl. 2025. Making Large Language Models into World Models with Precondition and Effect Knowledge. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7532–7545, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Making Large Language Models into World Models with Precondition and Effect Knowledge (Xie et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.503.pdf