%0 Conference Proceedings
%T Memorisation versus Generalisation in Pre-trained Language Models
%A Tänzer, Michael
%A Ruder, Sebastian
%A Rei, Marek
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F tanzer-etal-2022-memorisation
%X State-of-the-art pre-trained language models have been shown to memorise facts and perform well with limited amounts of training data. To gain a better understanding of how these models learn, we study their generalisation and memorisation capabilities in noisy and low-resource scenarios. We find that the training of these models is almost unaffected by label noise and that it is possible to reach near-optimal results even on extremely noisy datasets. However, our experiments also show that they mainly learn from high-frequency patterns and largely fail when tested on low-resource tasks such as few-shot learning and rare entity recognition. To mitigate such limitations, we propose an extension based on prototypical networks that improves performance in low-resource named entity recognition tasks.
%R 10.18653/v1/2022.acl-long.521
%U https://aclanthology.org/2022.acl-long.521
%U https://doi.org/10.18653/v1/2022.acl-long.521
%P 7564-7578