%0 Conference Proceedings
%T Reducing Model Churn: Stable Re-training of Conversational Agents
%A Hidey, Christopher
%A Liu, Fei
%A Goel, Rahul
%Y Lemon, Oliver
%Y Hakkani-Tur, Dilek
%Y Li, Junyi Jessy
%Y Ashrafzadeh, Arash
%Y Garcia, Daniel Hernández
%Y Alikhani, Malihe
%Y Vandyke, David
%Y Dušek, Ondřej
%S Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
%D 2022
%8 September
%I Association for Computational Linguistics
%C Edinburgh, UK
%F hidey-etal-2022-reducing
%X Retraining modern deep learning systems can lead to variations in model performance even when trained using the same data and hyper-parameters by simply using different random seeds. This phenomenon is known as model churn or model jitter. This issue is often exacerbated in real world settings, where noise may be introduced in the data collection process. In this work we tackle the problem of stable retraining with a novel focus on structured prediction for conversational semantic parsing. We first quantify the model churn by introducing metrics for agreement between predictions across multiple retrainings. Next, we devise realistic scenarios for noise injection and demonstrate the effectiveness of various churn reduction techniques such as ensembling and distillation. Lastly, we discuss practical trade-offs between such techniques and show that co-distillation provides a sweet spot in terms of churn reduction with only a modest increase in resource usage.
%R 10.18653/v1/2022.sigdial-1.2
%U https://aclanthology.org/2022.sigdial-1.2
%U https://doi.org/10.18653/v1/2022.sigdial-1.2
%P 14-25