%0 Conference Proceedings
%T Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement
%A Dalvi Mishra, Bhavana
%A Tafjord, Oyvind
%A Clark, Peter
%Y Goldberg, Yoav
%Y Kozareva, Zornitsa
%Y Zhang, Yue
%S Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates
%F dalvi-mishra-etal-2022-towards
%X Our goal is a teachable reasoning system for question-answering (QA), where a user can interact with faithful answer explanations and correct its errors so that the system improves over time. Our approach is to augment a QA model with a dynamic memory of user feedback, containing user-supplied corrections to erroneous model beliefs that users identify during interaction. Retrievals from memory are used as additional context for QA, to help avoid previous mistakes in similar new situations - a novel application of memory-based continuous learning. With simulated feedback, we find that our system (called TeachMe) continually improves with time, and without model retraining, requiring feedback on only 25% of training examples to reach within 1% of the upper-bound (feedback on all examples). In experiments with real users, we observe a similar trend, with performance improving by over 15% on a hidden test set after teaching. This suggests new opportunities for using frozen language models in an interactive setting where users can inspect, debug, and correct the model's beliefs, leading to improved system performance over time.
%R 10.18653/v1/2022.emnlp-main.644
%U https://aclanthology.org/2022.emnlp-main.644
%U https://doi.org/10.18653/v1/2022.emnlp-main.644
%P 9465-9480