Mei Si


A Corpus for Commonsense Inference in Story Cloze Test
Bingsheng Yao | Ethan Joseph | Julian Lioanag | Mei Si
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The Story Cloze Test (SCT) is designed for training and evaluating machine learning algorithms for narrative understanding and inference. State-of-the-art (SOTA) models can achieve over 90% accuracy on predicting the last sentence. However, it has been shown that such high accuracy can be achieved merely by using surface-level features, and we suspect these models may not truly understand the story. Based on the SCT dataset, we constructed a human-labeled and human-verified commonsense knowledge inference dataset. Given the first four sentences of a story, we asked crowdsourced workers to choose among four types of narrative inference for deciding the ending sentence and to identify which sentence contributes most to that inference. We accumulated data on 1,871 stories, with three human workers labeling each story. Analysis of the intra-category and inter-category agreement shows a high level of consensus. We present two new tasks: predicting the narrative inference category and predicting the contributing sentence. Our results show that transformer-based models can reach SOTA performance on the original SCT task using transfer learning but do not perform well on these new, more challenging tasks.

Towards a Progression-Aware Autonomous Dialogue Agent
Abraham Sanders | Tomek Strzalkowski | Mei Si | Albert Chang | Deepanshu Dey | Jonas Braasch | Dakuo Wang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Recent advances in large-scale language modeling and generation have enabled the creation of dialogue agents that exhibit human-like responses in a wide range of conversational scenarios spanning a diverse set of tasks, from general chit-chat to focused goal-oriented discourse. While these agents excel at generating high-quality responses that are relevant to prior context, they suffer from a lack of awareness of the overall direction in which the conversation is headed, and the likelihood of task success inherent therein. Thus, we propose a framework in which dialogue agents can evaluate the progression of a conversation toward or away from desired outcomes, and use this signal to inform planning for subsequent responses. Our framework is composed of three key elements: (1) the notion of a “global” dialogue state (GDS) space, (2) a task-specific progression function (PF) computed in terms of a conversation’s trajectory through this space, and (3) a planning mechanism based on dialogue rollouts by which an agent may use progression signals to select its next response.


Cross-media Event Extraction and Recommendation
Di Lu | Clare Voss | Fangbo Tao | Xiang Ren | Rachel Guan | Rostyslav Korolov | Tongtao Zhang | Dongang Wang | Hongzhi Li | Taylor Cassidy | Heng Ji | Shih-fu Chang | Jiawei Han | William Wallace | James Hendler | Mei Si | Lance Kaplan
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations