%0 Conference Proceedings
%T Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation
%A Zhao, Yingxiu
%A Tian, Zhiliang
%A Yao, Huaxiu
%A Zheng, Yinhe
%A Lee, Dongkyu
%A Song, Yiping
%A Sun, Jian
%A Zhang, Nevin
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F zhao-etal-2022-improving
%X Building models of natural language processing (NLP) is challenging in low-resource scenarios where limited data are available. Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks. To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model’s reliance on support sets for task adaptation. Specifically, we introduce a task-specific memory module to store support set information and construct an imitation module to force query sets to imitate the behaviors of support sets stored in the memory. A theoretical analysis is provided to prove the effectiveness of our method, and empirical results also demonstrate that our method outperforms competitive baselines on both text classification and generation tasks.
%R 10.18653/v1/2022.acl-long.44
%U https://aclanthology.org/2022.acl-long.44
%U https://doi.org/10.18653/v1/2022.acl-long.44
%P 583-595