Stephan Zheng
2020
ESPRIT: Explaining Solutions to Physical Reasoning Tasks
Nazneen Fatema Rajani | Rui Zhang | Yi Chern Tan | Stephan Zheng | Jeremy Weiss | Aadit Vyas | Abhijit Gupta | Caiming Xiong | Richard Socher
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Neural networks lack the ability to reason about qualitative physics and so cannot generalize to scenarios and tasks unseen during training. We propose ESPRIT, a framework for commonsense reasoning about qualitative physics in natural language that generates interpretable descriptions of physical events. We use a two-step approach of first identifying the pivotal physical events in an environment and then generating natural language descriptions of those events using a data-to-text approach. Our framework learns to generate explanations of how the physical simulation will causally evolve so that an agent or a human can easily reason about a solution using those interpretable descriptions. Human evaluations indicate that ESPRIT produces crucial fine-grained details and has high coverage of physical concepts compared to even human annotations. Dataset, code and documentation are available at https://github.com/salesforce/esprit.
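The two-step pipeline the abstract describes (first identify pivotal physical events, then generate natural language descriptions of them) can be sketched as follows. This is a minimal illustration only: the functions `detect_salient_events` and `describe`, and the contact-change heuristic, are hypothetical stand-ins, not the released ESPRIT implementation.

```python
# Hypothetical sketch of an "events then descriptions" pipeline,
# not the released ESPRIT code.

def detect_salient_events(frames):
    """Step 1: keep only the frames where object contacts change."""
    events, prev_contacts = [], set()
    for t, frame in enumerate(frames):
        contacts = {tuple(sorted(pair)) for pair in frame["contacts"]}
        if contacts != prev_contacts:
            events.append({"time": t, "contacts": contacts})
            prev_contacts = contacts
    return events

def describe(event):
    """Step 2: data-to-text; a template stands in for the learned generator."""
    pairs = ", ".join(f"{a} touches {b}" for a, b in sorted(event["contacts"]))
    return f"At t={event['time']}: {pairs or 'nothing is in contact'}."

# Toy simulation trace: a ball rolls down a ramp and reaches the goal.
frames = [
    {"contacts": []},
    {"contacts": [("ball", "ramp")]},
    {"contacts": [("ball", "ramp")]},
    {"contacts": [("ball", "goal")]},
]
for event in detect_salient_events(frames):
    print(describe(event))
```

Only the frames where the contact set changes become events, so the description stream stays short enough for a human or agent to reason over.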
Sketch-Fill-A-R: A Persona-Grounded Chit-Chat Generation Framework
Michael Shum | Stephan Zheng | Wojciech Kryscinski | Caiming Xiong | Richard Socher
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI
Human-like chit-chat conversation requires agents to generate responses that are fluent, engaging and consistent. We propose Sketch-Fill-A-R, a framework that uses a persona-memory to generate chit-chat responses in three phases. First, it generates dynamic sketch responses with open slots. Second, it generates candidate responses by filling slots with parts of its stored persona traits. Lastly, it ranks and selects the final response via a language model score. Sketch-Fill-A-R outperforms a state-of-the-art baseline both quantitatively (10-point lower perplexity) and qualitatively (preferred by 55% in head-to-head single-turn studies and 20% higher in consistency in multi-turn user studies) on the Persona-Chat dataset. Finally, we extensively analyze Sketch-Fill-A-R’s responses and human feedback, and show it is more consistent and engaging by using more relevant responses and questions.
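The three-phase sketch-fill-rank procedure from the abstract can be sketched as below. Everything here is an illustrative assumption: the `@slot` marker, the `fill_sketch` helper, and the length-based `lm_score` stand in for the learned sketch generator and language-model ranker, and are not the released Sketch-Fill-A-R model.

```python
# Hypothetical three-phase sketch-fill-rank loop,
# not the released Sketch-Fill-A-R model.
import itertools

def fill_sketch(sketch, persona_traits):
    """Phase 2: fill each @slot with every combination of persona traits."""
    n_slots = sketch.count("@slot")
    candidates = []
    for combo in itertools.product(persona_traits, repeat=n_slots):
        text = sketch
        for trait in combo:
            text = text.replace("@slot", trait, 1)
        candidates.append(text)
    return candidates

def lm_score(text):
    """Phase 3 stand-in: a real system uses language-model likelihood;
    here, shorter candidates simply score higher for demonstration."""
    return -len(text.split())

# Phase 1 output (a dynamic sketch response with open slots) is taken as given.
sketch = "I love @slot , do you like @slot ?"
persona = ["hiking", "jazz music"]
best = max(fill_sketch(sketch, persona), key=lm_score)
print(best)
```

Separating slot filling from ranking keeps every candidate grounded in a stored persona trait, while the final language-model score enforces fluency.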
Co-authors
- Caiming Xiong 2
- Richard Socher 2
- Nazneen Fatema Rajani 1
- Rui Zhang 1
- Yi Chern Tan 1