Naeem Ramzan


2022

A Comparative Study on Word Embeddings and Social NLP Tasks
Fatma Elsafoury | Steven R. Wilson | Naeem Ramzan
Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media

In recent years, gray social media platforms, i.e., those with loose moderation policies on cyberbullying, have attracted more users. Data collected from these platforms have recently been used to pre-train word embeddings (social-media-based embeddings), yet these embeddings have not been investigated for social NLP tasks. In this paper, we carried out a comparative study between social-media-based and non-social-media-based word embeddings on two social NLP tasks: detecting cyberbullying and measuring social bias. Our results show that using social-media-based word embeddings as input features, rather than non-social-media-based embeddings, leads to better cyberbullying detection performance. We also show that some word embeddings are more useful than others for categorizing offensive words. However, we do not find strong evidence that certain word embeddings will necessarily work best when identifying certain categories of cyberbullying within our datasets. Finally, we show that even though most state-of-the-art bias metrics rank social-media-based word embeddings as the most socially biased, these results remain inconclusive, and further research is required.
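
To make the feature-extraction setup concrete, the Python sketch below mean-pools pretrained social-media-based word vectors into document features for a simple cyberbullying classifier. This is a minimal illustration under stated assumptions, not the authors' code: the embedding name ("glove-twitter-25"), whitespace tokenizer, toy texts, and logistic-regression classifier are all placeholders rather than the paper's datasets or models.

    # Minimal sketch: mean-pooled word embeddings as classifier features.
    # Assumes gensim's downloader and scikit-learn are installed.
    import numpy as np
    import gensim.downloader as api
    from sklearn.linear_model import LogisticRegression

    # A social-media-based embedding (placeholder choice).
    vectors = api.load("glove-twitter-25")

    def embed(text):
        # Average the vectors of in-vocabulary tokens; zeros if none match.
        tokens = [t for t in text.lower().split() if t in vectors]
        if not tokens:
            return np.zeros(vectors.vector_size)
        return np.mean([vectors[t] for t in tokens], axis=0)

    # Toy examples standing in for a labeled cyberbullying dataset.
    texts = ["you are a great friend", "nobody likes you, loser"]
    labels = [0, 1]  # 0 = benign, 1 = cyberbullying
    X = np.stack([embed(t) for t in texts])
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(X))

Swapping the loaded model for a non-social-media-based embedding (e.g., one trained on Wikipedia text) while keeping the classifier fixed is one way to reproduce the kind of comparison the paper reports.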

SOS: Systematic Offensive Stereotyping Bias in Word Embeddings
Fatma Elsafoury | Steve R. Wilson | Stamos Katsigiannis | Naeem Ramzan
Proceedings of the 29th International Conference on Computational Linguistics

Systematic Offensive Stereotyping (SOS) in word embeddings could lead to associating marginalised groups with hate speech and profanity, which might result in the blocking and silencing of those groups, especially on social media platforms. In this work, we introduce a quantitative measure of the SOS bias, validate it in the most commonly used word embeddings, and investigate whether it explains the performance of different word embeddings on the task of hate speech detection. Results show that SOS bias exists in almost all examined word embeddings and that the proposed SOS bias metric correlates positively with the statistics of published surveys on online extremism. We also show that the proposed metric reveals distinct information compared to established social bias metrics. However, we do not find evidence that SOS bias explains the performance of hate speech detection models based on the different word embeddings.
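
The abstract does not spell out the metric's exact form, so the Python sketch below shows one plausible SOS-style measurement under an explicit assumption: scoring an embedding by the mean cosine similarity between identity-group terms and a small profanity lexicon. The word lists and embedding name are illustrative placeholders, not the paper's materials or its actual formula.

    # Assumed SOS-style score (not the paper's exact metric): mean cosine
    # similarity between identity-group terms and an offensive lexicon.
    import numpy as np
    import gensim.downloader as api

    vectors = api.load("glove-twitter-25")  # placeholder embedding choice

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def sos_score(identity_terms, offensive_terms):
        # Average association of each identity term with the lexicon,
        # skipping out-of-vocabulary words.
        pairs = [(i, o) for i in identity_terms for o in offensive_terms
                 if i in vectors and o in vectors]
        if not pairs:
            return float("nan")
        return float(np.mean([cosine(vectors[i], vectors[o])
                              for i, o in pairs]))

    # Placeholder word lists purely for illustration.
    print(sos_score(["women", "muslims"], ["scum", "trash"]))

Computing such a score for several pretrained embeddings yields a bias ranking that could then be compared against hate speech detection performance, which is the kind of analysis the paper carries out with its own metric.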