%0 Conference Proceedings
%T Reducing Gender Bias in Abusive Language Detection
%A Park, Ji Ho
%A Shin, Jamin
%A Fung, Pascale
%Y Riloff, Ellen
%Y Chiang, David
%Y Hockenmaier, Julia
%Y Tsujii, Jun’ichi
%S Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
%D 2018
%8 oct nov
%I Association for Computational Linguistics
%C Brussels, Belgium
%F park-etal-2018-reducing
%X Abusive language detection models tend to have a problem of being biased toward identity words of a certain group of people because of imbalanced training datasets. For example, “You are a good woman” was considered “sexist” when trained on an existing dataset. Such model bias is an obstacle for models to be robust enough for practical use. In this work, we measure such biases on models trained with different datasets, while analyzing the effect of different pre-trained word embeddings and model architectures. We also experiment with three mitigation methods: (1) debiased word embeddings, (2) gender swap data augmentation, and (3) fine-tuning with a larger corpus. These methods can effectively reduce model bias by 90-98% and can be extended to correct model bias in other scenarios.
%R 10.18653/v1/D18-1302
%U https://aclanthology.org/D18-1302
%U https://doi.org/10.18653/v1/D18-1302
%P 2799-2804
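
The abstract lists gender swap data augmentation as one of the three mitigation methods. The following is a minimal Python sketch of that general idea only; the word-pair dictionary, function names, and label handling are illustrative assumptions and are not taken from the paper or its released code.

```python
# Minimal sketch of gender-swap data augmentation: for every training
# example, append a copy with gendered words swapped, keeping the label.
# The pair list below is a small illustrative subset, not the paper's.

GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "hers", "hers": "his",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
    "boy": "girl", "girl": "boy",
}

def gender_swap(tokens):
    """Return a copy of the token list with gendered words swapped.

    Capitalization is not preserved here; a fuller implementation would
    restore case and disambiguate words like "her" (possessive vs. object).
    """
    return [GENDER_PAIRS.get(tok.lower(), tok) for tok in tokens]

def augment(dataset):
    """Append a gender-swapped copy of every (tokens, label) example."""
    augmented = list(dataset)
    for tokens, label in dataset:
        augmented.append((gender_swap(tokens), label))
    return augmented

if __name__ == "__main__":
    data = [(["You", "are", "a", "good", "woman"], "not_sexist")]
    print(augment(data))
```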