Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning

Xiang Zhou, Yixin Nie, Mohit Bansal


Abstract
We introduce distributed NLI, a new NLU task whose goal is to predict the distribution of human judgements for natural language inference. We show that by applying additional distribution estimation methods, namely Monte Carlo (MC) Dropout, Deep Ensemble, Re-Calibration, and Distribution Distillation, models can capture human judgement distributions more effectively than the softmax baseline. We show that MC Dropout achieves decent performance without any distribution annotations, while Re-Calibration gives further improvements with extra distribution annotations, suggesting the value of multiple annotations per example in modeling the distribution of human judgements. Despite these improvements, the best results are still far below the estimated human upper bound, indicating that predicting the distribution of human judgements remains an open, challenging problem with large room for improvement. We showcase the common errors for MC Dropout and Re-Calibration. Finally, we give guidelines on the usage of these methods under different levels of data availability and encourage future work on modeling human opinion distributions for language reasoning.
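Of the methods above, MC Dropout is the one that needs no distribution annotations: dropout is kept active at test time, and the softmax outputs of many stochastic forward passes are averaged into one predicted label distribution. The following is a minimal NumPy sketch of that idea on a hypothetical toy classifier (random weights, 3 NLI labels); it illustrates the averaging step only, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


# Hypothetical 2-layer classifier; 3 outputs for entailment/neutral/contradiction.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)


def forward(x, drop_p=0.3):
    """One stochastic forward pass with (inverted) dropout on the hidden layer."""
    h = np.maximum(x @ W1 + b1, 0.0)
    mask = rng.random(h.shape) > drop_p
    h = h * mask / (1.0 - drop_p)
    return softmax(h @ W2 + b2)


def mc_dropout_predict(x, passes=100):
    """Keep dropout on at test time; average per-pass softmax distributions
    to estimate the distribution of human judgements."""
    return np.mean([forward(x) for _ in range(passes)], axis=0)


x = rng.normal(size=(8,))
dist = mc_dropout_predict(x)  # a length-3 probability distribution over labels
```

With dropout disabled this would collapse back to the ordinary softmax baseline; the spread across passes is what lets the averaged prediction place mass on more than one label.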
Anthology ID:
2022.findings-acl.79
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
972–987
URL:
https://aclanthology.org/2022.findings-acl.79
DOI:
10.18653/v1/2022.findings-acl.79
Cite (ACL):
Xiang Zhou, Yixin Nie, and Mohit Bansal. 2022. Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 972–987, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning (Zhou et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.79.pdf
Software:
 2022.findings-acl.79.software.zip
Code
 easonnie/ChaosNLI
Data
ChaosNLI, MultiNLI, SNLI