What Are the Odds? Language Models Are Capable of Probabilistic Reasoning
Akshay Paruchuri | Jake Garrison | Shun Liao | John Hernandez | Jacob Sunshine | Tim Althoff | Xin Liu | Daniel McDuff
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Language models (LMs) are capable of remarkably complex linguistic tasks; however, numerical reasoning is an area in which they frequently struggle. An important but rarely evaluated form of reasoning is understanding probability distributions. In this paper, we focus on evaluating the probabilistic reasoning capabilities of LMs using idealized and real-world statistical distributions. We perform a systematic evaluation of state-of-the-art LMs on three tasks: estimating percentiles, drawing samples, and calculating probabilities. We evaluate three ways to provide context to LMs: 1) anchoring examples from within a distribution or family of distributions, 2) real-world context, and 3) summary statistics on which to base a Normal approximation. Models can make inferences about distributions, and can be further aided by the incorporation of real-world context, example shots, and simplified assumptions, even if these assumptions are incorrect or misspecified. To conduct this work, we developed a comprehensive benchmark distribution dataset with associated question-answer pairs that we have released publicly.
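To make the three tasks concrete, the sketch below computes reference answers for each on an idealized Normal distribution using Python's standard library. This is an illustrative example only, not the paper's released benchmark code; the distribution parameters (a hypothetical adult-height distribution) are assumptions for demonstration.

```python
# Illustrative sketch (not from the paper's benchmark): reference answers
# for the three probabilistic-reasoning tasks on an idealized Normal
# distribution, the kind of ground truth an LM's answers could be scored
# against. The parameters below are hypothetical.
from statistics import NormalDist

dist = NormalDist(mu=170, sigma=7)  # hypothetical heights in cm

# Task 1 — estimating percentiles: what percentile is a value of 180?
percentile = dist.cdf(180) * 100  # ~92.3

# Task 2 — drawing samples: values consistent with the distribution.
samples = dist.samples(5, seed=0)

# Task 3 — calculating probabilities: P(160 <= X <= 180).
prob = dist.cdf(180) - dist.cdf(160)  # ~0.847

print(f"percentile={percentile:.1f}, prob={prob:.3f}")
```

A Normal approximation from summary statistics, as in the paper's third context condition, amounts to exactly this: plugging a reported mean and standard deviation into `NormalDist` and reading off percentiles and probabilities, even when the true distribution is not Normal.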