%0 Conference Proceedings
%T Reducing Non-Normative Text Generation from Language Models
%A Peng, Xiangyu
%A Li, Siyan
%A Frazier, Spencer
%A Riedl, Mark
%Y Davis, Brian
%Y Graham, Yvette
%Y Kelleher, John
%Y Sripada, Yaji
%S Proceedings of the 13th International Conference on Natural Language Generation
%D 2020
%8 December
%I Association for Computational Linguistics
%C Dublin, Ireland
%F peng-etal-2020-reducing
%X Large-scale, transformer-based language models such as GPT-2 are pretrained on diverse corpora scraped from the internet. Consequently, they are prone to generating non-normative text (i.e. in violation of social norms). We introduce a technique for fine-tuning GPT-2, using a policy gradient reinforcement learning technique and a normative text classifier to produce reward and punishment values. We evaluate our technique on five data sets using automated and human participant experiments. The normative text classifier is 81-90% accurate when compared to gold-standard human judgements of normative and non-normative generated text. Our normative fine-tuning technique is able to reduce non-normative text by 27-61%, depending on the data set.
%R 10.18653/v1/2020.inlg-1.43
%U https://aclanthology.org/2020.inlg-1.43
%U https://doi.org/10.18653/v1/2020.inlg-1.43
%P 374-383