Learning to Explain: Answering Why-Questions via Rephrasing

Allen Nie, Erin Bennett, Noah Goodman


Abstract
Providing plausible responses to why-questions is a challenging but critical goal for language-based human-machine interaction. Explanations are challenging in that they require many different forms of abstract knowledge and reasoning. Previous work has relied on human-curated structured knowledge bases or detailed domain representations to generate satisfactory explanations, and is often limited to ranking pre-existing explanation choices. In our work, we contribute to the under-explored area of generating natural language explanations for general phenomena. We automatically collect large datasets of explanation-phenomenon pairs, which allow us to train sequence-to-sequence models to generate natural language explanations. We compare different training strategies and evaluate their performance using both automatic scores and human ratings. We demonstrate that our strategy is sufficient to generate highly plausible explanations for general open-domain phenomena, compared to other models trained on different datasets.
Anthology ID:
W19-4113
Volume:
Proceedings of the First Workshop on NLP for Conversational AI
Month:
August
Year:
2019
Address:
Florence, Italy
Venues:
ACL | WS
Publisher:
Association for Computational Linguistics
Pages:
113–120
URL:
https://aclanthology.org/W19-4113
DOI:
10.18653/v1/W19-4113
PDF:
https://aclanthology.org/W19-4113.pdf
Code
 windweller/L2EWeb
Data
BookCorpus | COPA | WSC