Exploring the Learning Capabilities of Language Models using LEVERWORLDS

Eitan Wagner, Amir Feder, Omri Abend


Abstract
Learning a model of a stochastic setting often involves learning both general structure rules and specific properties of the instance. This paper investigates the interplay between learning the general and the specific in various learning methods, with emphasis on sample efficiency. We design a framework called LEVERWORLDS, which allows the generation of simple physics-inspired worlds that follow a similar generative process with different distributions, and whose instances can be expressed in natural language. These worlds allow for controlled experiments to assess the sample complexity of different learning methods. We experiment with classic learning algorithms as well as Transformer language models, both with fine-tuning and In-Context Learning (ICL). Our general findings are that (1) Transformers generally succeed in the task, but (2) they are considerably less sample-efficient than classic methods that make stronger assumptions about the structure, such as Maximum Likelihood Estimation and Logistic Regression. This finding is in tension with the recent tendency to use Transformers as general-purpose estimators. We propose an approach that leverages the ICL capabilities of contemporary language models to apply simple algorithms for this type of data. Our experiments show that current models struggle with the task but exhibit promising potential.
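To make the setup concrete, below is a minimal sketch of what a LEVERWORLDS-style experiment could look like. This is an illustrative assumption, not the paper's actual framework or code: the world, attachment positions, weight distribution, and sample sizes are all hypothetical. A lever has two weights attached at hidden positions; each trial draws random weights, and the (stochastic) observed outcome is the side the lever tips toward. The sketch probes the sample efficiency of one of the classic, structure-aware estimators the abstract mentions (logistic regression on the raw weights).

```python
# Hypothetical LEVERWORLDS-style instance (illustration only, not the
# paper's code). Outcome probability follows a sigmoid of the net torque,
# so logistic regression is a well-specified, structure-aware estimator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hidden instance-specific properties: attachment positions relative to
# the pivot (negative = left side). Values are arbitrary for illustration.
POS_LEFT, POS_RIGHT = -1.0, 1.0

def sample_trials(n):
    """Draw n trials: observed weights on each side, and the tipping outcome."""
    weights = rng.uniform(1.0, 10.0, size=(n, 2))
    torque = POS_LEFT * weights[:, 0] + POS_RIGHT * weights[:, 1]
    p_right = 1.0 / (1.0 + np.exp(-torque))          # stochastic world
    labels = (rng.random(n) < p_right).astype(int)   # 1 = lever tips right
    return weights, labels

# Sample-efficiency probe: test accuracy as training-set size grows.
X_test, y_test = sample_trials(5000)
for n in (16, 64, 256, 1024):
    X_train, y_train = sample_trials(n)
    clf = LogisticRegression().fit(X_train, y_train)
    print(f"n={n:5d}  test accuracy = {clf.score(X_test, y_test):.3f}")
```

In the paper's comparison, the same trials would additionally be rendered in natural language and given to a Transformer via fine-tuning or ICL; the sketch above covers only the classic-estimator side of that comparison.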
Anthology ID:
2024.emnlp-main.865
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15458–15468
URL:
https://aclanthology.org/2024.emnlp-main.865
Cite (ACL):
Eitan Wagner, Amir Feder, and Omri Abend. 2024. Exploring the Learning Capabilities of Language Models using LEVERWORLDS. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15458–15468, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Exploring the Learning Capabilities of Language Models using LEVERWORLDS (Wagner et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.865.pdf