ConvoSense: Overcoming Monotonous Commonsense Inferences for Conversational AI

Sarah E. Finch, Jinho D. Choi


Abstract
Mastering commonsense understanding and reasoning is a pivotal skill for conducting engaging conversations. While several datasets have been created to facilitate commonsense inferences in dialogue contexts, existing datasets tend to lack in-depth details, restate information already present in the conversation, and often fail to capture the multifaceted nature of commonsense reasoning. In response to these limitations, we compile ℂonvoSense, a new synthetic dataset for commonsense reasoning in dialogue contexts generated using GPT, which boasts greater contextual novelty, offers a higher volume of inferences per example, and substantially enriches the detail conveyed by the inferences. Our dataset contains over 500,000 inferences across 12,000 dialogues spanning 10 popular inference types, enabling the training of generative commonsense models for dialogue that produce more plausible and novel inferences than models trained on previous datasets. To the best of our knowledge, ℂonvoSense is the first of its kind to provide such a multitude of novel inferences at such a large scale.
Anthology ID:
2024.tacl-1.26
Volume:
Transactions of the Association for Computational Linguistics, Volume 12
Year:
2024
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
467–483
URL:
https://aclanthology.org/2024.tacl-1.26
DOI:
10.1162/tacl_a_00659
Cite (ACL):
Sarah E. Finch and Jinho D. Choi. 2024. ConvoSense: Overcoming Monotonous Commonsense Inferences for Conversational AI. Transactions of the Association for Computational Linguistics, 12:467–483.
Cite (Informal):
ConvoSense: Overcoming Monotonous Commonsense Inferences for Conversational AI (Finch & Choi, TACL 2024)
PDF:
https://aclanthology.org/2024.tacl-1.26.pdf