%0 Conference Proceedings
%T Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?
%A Choenni, Rochelle
%A Shutova, Ekaterina
%A van Rooij, Robert
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F choenni-etal-2021-stepmothers
%X In this paper, we investigate what types of stereotypical information are captured by pretrained language models. We present the first dataset comprising stereotypical attributes of a range of social groups and propose a method to elicit stereotypes encoded by pretrained language models in an unsupervised fashion. Moreover, we link the emergent stereotypes to their manifestation as basic emotions as a means to study their emotional effects in a more generalized manner. To demonstrate how our methods can be used to analyze emotion and stereotype shifts due to linguistic experience, we use fine-tuning on news sources as a case study. Our experiments expose how attitudes towards different social groups vary across models and how quickly emotions and stereotypes can shift at the fine-tuning stage.
%R 10.18653/v1/2021.emnlp-main.111
%U https://aclanthology.org/2021.emnlp-main.111
%U https://doi.org/10.18653/v1/2021.emnlp-main.111
%P 1477-1491