Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback

Niket Tandon, Aman Madaan, Peter Clark, Yiming Yang


Abstract
Large language models (LMs), while powerful, are not immune to mistakes but can be difficult to retrain. Our goal is for an LM to continue to improve after deployment, without retraining, using feedback from the user. Our approach pairs an LM with (i) a growing memory of cases where the user identified an output error and provided general feedback on how to correct it, and (ii) a corrector model, trained to translate this general feedback into specific edits to repair the model output. Given a new, unseen input, our model can then use feedback from similar past cases to repair output errors that may occur. We instantiate our approach using an existing, fixed model for script generation, which takes a goal (e.g., “bake a cake”) and generates a partially ordered sequence of actions to achieve that goal, sometimes containing errors. Our memory-enhanced system learns to apply user feedback to repair such errors (up to 30 points improvement), while making a start at avoiding similar past mistakes on new, unseen examples (up to 7 points improvement in a controlled setting). This is a first step towards strengthening deployed models, potentially broadening their utility. Our code and data are available at https://github.com/allenai/interscript
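The loop the abstract describes (a growing memory of user feedback, a retriever over past cases, and a corrector that applies retrieved feedback to a new output) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the names FeedbackMemory, generate_script, and repair are hypothetical, word-overlap retrieval stands in for a learned retriever, and the real corrector is a trained model rather than the toy edit shown here.

from __future__ import annotations

# Minimal sketch of a memory-of-feedback repair loop (illustrative only;
# class/function names are assumptions, not the paper's actual API).
from dataclasses import dataclass, field


@dataclass
class Case:
    goal: str      # the input goal, e.g. "bake a cake"
    feedback: str  # general user feedback, e.g. "preheat the oven first"


@dataclass
class FeedbackMemory:
    cases: list[Case] = field(default_factory=list)

    def add(self, goal: str, feedback: str) -> None:
        """Grow the memory with a user-identified error and its feedback."""
        self.cases.append(Case(goal, feedback))

    def retrieve(self, goal: str) -> Case | None:
        """Return the most similar stored case; word overlap stands in
        for the learned retriever used in practice."""
        def overlap(c: Case) -> int:
            return len(set(goal.lower().split()) & set(c.goal.lower().split()))
        best = max(self.cases, key=overlap, default=None)
        return best if best and overlap(best) > 0 else None


def generate_script(goal: str) -> list[str]:
    """Placeholder for the fixed, deployed script-generation model."""
    return [f"gather ingredients for {goal}", f"assemble {goal}", "put in oven"]


def repair(script: list[str], feedback: str) -> list[str]:
    """Placeholder corrector: the paper trains a model to translate general
    feedback into specific edits; here we simply prepend the suggested step."""
    return [feedback] + script


memory = FeedbackMemory()
memory.add("bake a cake", "preheat the oven before baking")

goal = "bake cookies"
script = generate_script(goal)
case = memory.retrieve(goal)  # retrieves the similar past case "bake a cake"
if case:
    script = repair(script, case.feedback)
print(script)

A usage note on the design: keeping the memory outside the frozen generator is what lets the system improve after deployment without retraining; new feedback simply becomes a new retrievable case.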
Anthology ID:
2022.findings-naacl.26
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
339–352
URL:
https://aclanthology.org/2022.findings-naacl.26
DOI:
10.18653/v1/2022.findings-naacl.26
Cite (ACL):
Niket Tandon, Aman Madaan, Peter Clark, and Yiming Yang. 2022. Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 339–352, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback (Tandon et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.26.pdf
Video:
https://aclanthology.org/2022.findings-naacl.26.mp4
Code:
allenai/interscript