%0 Conference Proceedings
%T Generating Hypothetical Events for Abductive Inference
%A Paul, Debjit
%A Frank, Anette
%Y Ku, Lun-Wei
%Y Nastase, Vivi
%Y Vulić, Ivan
%S Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F paul-frank-2021-generating
%X Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive NLI task – which consists in choosing the more likely explanation for given observations. We train a specialized language model LMI that is tasked to generate what could happen next from a hypothetical scenario that evolves from a given event. We then propose a multi-task model MTL to solve the Abductive NLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypotheses – events generated by LMI – and b) selecting the one that is most similar to the observed outcome. We show that our MTL model improves over prior vanilla pre-trained LMs fine-tuned on Abductive NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.
%R 10.18653/v1/2021.starsem-1.6
%U https://aclanthology.org/2021.starsem-1.6
%U https://doi.org/10.18653/v1/2021.starsem-1.6
%P 67-77