Modeling Event Plausibility with Consistent Conceptual Abstraction

Ian Porada, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung


Abstract
Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events. While distributional models—most recently pre-trained, Transformer language models—have demonstrated improvements in modeling event plausibility, their performance still falls short of humans’. In this work, we show that Transformer-based plausibility models are markedly inconsistent across the conceptual classes of a lexical hierarchy, inferring that “a person breathing” is plausible while “a dentist breathing” is not, for example. We find this inconsistency persists even when models are softly injected with lexical knowledge, and we present a simple post-hoc method of forcing model consistency that improves correlation with human plausibility judgements.
Anthology ID:
2021.naacl-main.138
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1732–1743
URL:
https://aclanthology.org/2021.naacl-main.138
DOI:
10.18653/v1/2021.naacl-main.138
Cite (ACL):
Ian Porada, Kaheer Suleman, Adam Trischler, and Jackie Chi Kit Cheung. 2021. Modeling Event Plausibility with Consistent Conceptual Abstraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1732–1743, Online. Association for Computational Linguistics.
Cite (Informal):
Modeling Event Plausibility with Consistent Conceptual Abstraction (Porada et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.138.pdf
Video:
https://aclanthology.org/2021.naacl-main.138.mp4
Code:
ianporada/modeling_event_plausibility