Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses

Liyan Tang, Yifan Peng, Yanshan Wang, Ying Ding, Greg Durrett, Justin Rousseau


Abstract
A human decision-maker benefits the most from an AI assistant that corrects for their biases. For problems such as generating an interpretation of a radiology report given its findings, a system that predicts only highly likely outcomes may be less useful, since such outcomes are often already obvious to the user. To alleviate biases in human decision-making, it is worth considering a broad differential diagnosis, going beyond the most likely options. We introduce a new task, “less likely brainstorming,” that asks a model to generate outputs that humans think are relevant but less likely to happen. We explore the task in two settings: a brain MRI interpretation generation setting and an everyday commonsense reasoning setting. We find that a baseline approach of training with less likely hypotheses as targets generates outputs that humans judge as either likely or irrelevant nearly half the time; standard MLE training is not effective. To tackle this problem, we propose a controlled text generation method that uses a novel contrastive learning strategy to encourage models to differentiate between generating likely and less likely outputs according to humans. We compare our method with several state-of-the-art controlled text generation models via automatic and human evaluations and show that our method improves the models’ ability to generate less likely outputs.
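The abstract describes a contrastive objective that separates likely from less likely hypotheses. As a rough illustration only (the paper's actual loss may differ; all names and the margin formulation here are assumptions, not taken from the paper), a hinge-style contrastive term over sequence scores could look like this:

```python
import math

def sequence_logprob(token_logprobs):
    """Score a candidate hypothesis as the sum of its per-token log-probabilities."""
    return sum(token_logprobs)

def contrastive_margin_loss(likely_scores, less_likely_scores, margin=1.0):
    """Illustrative hinge-style contrastive loss (hypothetical formulation).

    When the model is conditioned on a 'less likely' control signal, each
    hypothesis humans judged less likely (s_pos) should outscore each
    hypothesis humans judged likely (s_neg) by at least `margin`.
    """
    losses = []
    for s_neg in likely_scores:
        for s_pos in less_likely_scores:
            losses.append(max(0.0, margin - (s_pos - s_neg)))
    return sum(losses) / len(losses)

# When a less-likely hypothesis already outscores a likely one by the
# margin, the pair contributes zero loss; otherwise the gap is penalized.
loss_ok = contrastive_margin_loss([-2.0], [-0.5], margin=1.0)   # 0.0
loss_bad = contrastive_margin_loss([-0.5], [-2.0], margin=1.0)  # 2.5
```

This is only a sketch of the general contrastive-margin idea the abstract alludes to, not the paper's method.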
Anthology ID: 2023.findings-acl.794
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12532–12555
URL: https://aclanthology.org/2023.findings-acl.794
DOI: 10.18653/v1/2023.findings-acl.794
Cite (ACL):
Liyan Tang, Yifan Peng, Yanshan Wang, Ying Ding, Greg Durrett, and Justin Rousseau. 2023. Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses. In Findings of the Association for Computational Linguistics: ACL 2023, pages 12532–12555, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses (Tang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.794.pdf