QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios

Timo Pierre Schrader, Lukas Lange, Simon Razniewski, Annemarie Friedrich


Abstract
Reasoning is key to many decision-making processes. It requires consolidating a set of rule-like premises, which are often associated with degrees of uncertainty, with observations in order to draw conclusions. In this work, we address both the case in which premises are specified as numeric probabilistic rules and situations in which humans state their estimates using words expressing degrees of certainty. Existing probabilistic reasoning datasets simplify the task, e.g., by requiring the model to only rank textual alternatives, by including only binary random variables, or by making use of a limited set of templates that result in less varied text. In this work, we present QUITE, a question answering dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships. QUITE provides high-quality natural language verbalizations of premises together with evidence statements, and expects the answer to a question in the form of an estimated probability. We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types (causal, evidential, and explaining-away). Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning. We release QUITE and code for training and experiments on GitHub.
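To make the task format concrete, the following is a minimal, self-contained Python sketch (illustrative only, not an example from QUITE) of the explaining-away pattern named in the abstract. It uses the textbook burglary/earthquake/alarm network with made-up probability values: the premises are numeric probabilistic rules, the evidence is a set of observed variable assignments, and the answer to a query is an estimated probability computed by exhaustive enumeration.

# Toy Bayesian network for explaining-away (hypothetical numbers, not QUITE data).
from itertools import product

# Premises as numeric probabilistic rules: priors and a conditional
# probability table P(alarm | burglary, earthquake).
P_burglary = 0.01
P_earthquake = 0.02
P_alarm = {
    (True, True): 0.95,
    (True, False): 0.94,
    (False, True): 0.29,
    (False, False): 0.001,
}

def joint(b, e, a):
    """Joint probability P(B=b, E=e, A=a) via the network's chain rule."""
    p = P_burglary if b else 1 - P_burglary
    p *= P_earthquake if e else 1 - P_earthquake
    p *= P_alarm[(b, e)] if a else 1 - P_alarm[(b, e)]
    return p

def posterior_burglary(evidence):
    """P(Burglary=True | evidence) by enumerating all consistent worlds."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"burglary": b, "earthquake": e, "alarm": a}
        if any(world[k] != v for k, v in evidence.items()):
            continue  # world contradicts the observed evidence
        p = joint(b, e, a)
        den += p
        if b:
            num += p
    return num / den

# Evidential reasoning: hearing the alarm raises belief in a burglary ...
print(posterior_burglary({"alarm": True}))                      # ~0.58
# ... explaining away: also observing the earthquake lowers it again.
print(posterior_burglary({"alarm": True, "earthquake": True}))  # ~0.03

Running the sketch prints roughly 0.58 and then 0.03: the alarm alone makes a burglary likely, but additionally observing the earthquake "explains away" the alarm and sharply reduces the burglary's posterior probability.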
Anthology ID:
2024.emnlp-main.153
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2634–2652
URL:
https://aclanthology.org/2024.emnlp-main.153
DOI:
10.18653/v1/2024.emnlp-main.153
Cite (ACL):
Timo Pierre Schrader, Lukas Lange, Simon Razniewski, and Annemarie Friedrich. 2024. QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2634–2652, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios (Schrader et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.153.pdf
Data:
2024.emnlp-main.153.data.zip